18F will work to understand how GSA websites and systems are currently structured, evaluate user needs and challenges, and conduct a review of GSA’s overall digital presence. 18F will then define a strategy and roadmap for how to address the most pressing challenges with the current information architecture of GSA’s websites, ultimately driving towards a simplified site structure that better resonates with users.
Following the path analysis, 18F will prepare a synthesis of findings as well as preliminary recommendations identifying the next steps towards developing the requirements for FAS to acquire a solution for OSC.
During this phase, 18F will provide GSA OSC with the following:
Weekly updates, stored in this repository's weekly-update directory.
We'll use Git to pull down and manage our code base. There are many excellent tutorials for getting started with git, so we'll defer to them here. We'll assume you have cloned our repository and are now within it:
git clone https://github.com/18F/osc-website-pa.git
cd osc-website-pa
We use Docker to get a local environment running quickly. Download and install the runtime compatible with your system. Note that Docker for Windows requires Windows 10; use Docker Toolbox on older Windows environments. Docker will manage our PHP dependencies, get Apache running, and generally allow us to run an instance of our application locally. We'll be using the bash-friendly scripts in bin, but they wouldn't need to be modified substantially for Windows or other environments.
Our first step is to run
bin/composer install
This command will start by building a Docker image with the PHP modules we need, unless the image already exists. It will then use Composer to install dependencies from our composer.lock file. We can ignore the warning about running as root, as the "root" in question is the root user within the container. Should we need to add dependencies in the future, we can use bin/composer require as described in Composer's docs.
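For example, adding a hypothetical module would look like this (the package name below is purely illustrative; substitute whatever you actually need):
bin/composer require drupal/token
Composer will resolve the new dependency and update composer.json and composer.lock, both of which should be committed.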
Next, we can start our application:
docker-compose up
This will start up the database (MySQL) and then run our bootstrap script to install Drupal. The initial installation and configuration import will take several minutes, but we should see status updates in the terminal.
After we see a message about apache2 -D FOREGROUND, we're good to go. Navigate to http://localhost:8080/ and log in as the root user (username and password are both "root").
To stop the service, press ctrl-c in the terminal. The next time we start it, we'll see a similar bootstrap process, but it should be significantly faster.
As the service runs, we can directly modify the PHP files in our app and see our changes in near-real time.
This codebase's theme is a subtheme of the U.S. Web Design System theme. Accordingly, its overrides are stored in /web/themes/custom/osc. Our style changes are all within the context of the osc "theme", so we'll start by getting there:
cd web/themes/custom/osc
If this is the first time we're editing a theme, we next need to install all of the relevant node modules:
npm install
Finally, we'll start our "watch" script:
npm run build:watch
As long as that command is running, it'll watch every .scss file in the sass/ folder for changes, compiling and saving CSS in the assets/css/ folder every time you save a change to a .scss file.
Now, in a separate Terminal window and/or your favorite text editor, you can make changes to web/themes/custom/osc/sass/uswds.scss (or _variables.scss) and have your changes compiled automatically.
Within the bin directory, there are a handful of helpful scripts to make running drupal, drush, etc. within the context of our Dockerized app easier. As noted above, they are written with bash in mind, but should be easy to port to other environments.
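For instance, assuming the containers from docker-compose up are running, a quick health check via Drush might look like:
bin/drush status
which reports the Drupal version, database connection, and similar details for the local install.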
By default, we don't use S3 when running Drupal locally. We don't recommend this, but if you need to simulate the S3 environment, you'll need to add our credentials to the VCAP_SERVICES environment variable. Edit docker-compose.yml and insert something similar to the following above "user-provided":
"s3": [{
"name": "osc-storage",
"credentials": {
"access_key_id": "SECRET",
"bucket": "SECRET",
"region": "SECRET",
"secret_access_key": "SECRET"
}
}],
And also add
S3_BUCKET: 'SECRET'
S3_REGION: 'SECRET'
under the environment: line.
To find the values we're using in cloud.gov, use
cf env osc-web
As with other edits to the local secrets, extra care should be taken when exporting your config and checking this data into git, lest those configuration files contain the true secret values rather than dummy "SECRET" strings.
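One lightweight safeguard (in addition to reviewing the full diff, not instead of it) is to scan the staged changes for anything credential-shaped before committing:
git diff --cached | grep -iE 'key|secret|password|token'
and confirm that every match still shows the dummy "SECRET" placeholder rather than a real value.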
Making configuration changes to the application comes in roughly eight small steps:
To get the latest code, we can fetch it from GitHub.
git fetch origin
git checkout origin/master
Alternatively:
git checkout master
git pull origin master
We then create a "feature" branch, meaning a branch of development that's focused on adding a single feature. We'll need to name the branch something unique, likely related to the task we're working on (perhaps including an issue number, for example).
git checkout -b 333-add-the-whatsit
If we are installing a new module or otherwise updating our dependencies, we next use composer. For example:
bin/composer require drupal/some-new-module
See the "Removing dependencies" section below for notes on that topic; it's a bit different than installation/updates.
If we're making admin changes (including enabling any newly installed modules), we'll need to start our app locally.
docker-compose down # stop any running instance
docker-compose up # start a new one with our code
Then navigate to http://localhost:8080 and log in as root (username and password are both "root"). Modify whatever settings you desire, which will change them in your local database. We'll next need to export those configurations to the file system:
bin/drupal config:export
We're almost done! We next need to review all of the changes and commit those that are relevant. Your git tool will have a diff viewer, but if you're using the command line, try
git add -p
to interactively select changes to stage for the commit. Once the changes are staged, commit them, e.g. with
git commit -v
Be sure to add a descriptive commit message. Now we can send the changes to GitHub:
git push origin 333-add-the-whatsit
And request a review in GitHub's interface.
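If you'd rather stay on the command line, GitHub's gh CLI (optional tooling, not something this repo requires) can open the pull request for you:
gh pr create --fill
Either way, the review itself happens in GitHub as usual.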
We'll also treat some pieces of content similar to configuration -- we want to deploy it with the code base rather than add/modify it in individual environments. The steps for this are very similar to the Config workflow:
The first two steps are identical to the Config workflow, so we'll skip to the third. Start the application:
docker-compose up
Then log in as root (password: root). Create or edit content (e.g. Aggregator feeds, pages, etc.) through the Drupal admin.
Next, we'll export this content via Drush:
# Export all entities of a particular type
bin/drush default-content-deploy:export [type-of-entity e.g. aggregator_feed]
# Export individual entities
bin/drush default-content-deploy:export [type-of-entity] --entity-id=[ids e.g. 1,3,7]
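As a concrete (hypothetical) example, exporting two specific aggregator feeds with IDs 1 and 3 would be:
bin/drush default-content-deploy:export aggregator_feed --entity-id=1,3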
Then, we'll review all of the changes and commit those that are relevant.
Notably, we're expecting new or modified files in web/sites/default/content. After committing, we'll send the changes to GitHub and create a pull request as with config changes.
As we add modules to our site, they're rolled out via configuration synchronization. This'll run the installation of new modules, including setting up database tables. Unfortunately, removing modules isn't as simple as deleting the PHP lib and deactivating the plugin. Modules and themes need to be fully uninstalled, which will remove their content from the database and perform other sorts of cleanup. To do that, however, we need to have the PHP lib around to run the cleanup.
Our solution is to have a step in our bootstrap script which uninstalls modules/themes prior to configuration import. To do this, we'll need to keep the PHP libs around so that the uninstallation hooks can be called. After we're confident that the library is uninstalled in all our environments, we can also remove it from the composer dependencies.
See the module:uninstall and theme:uninstall steps of the bootstrap script to see how this is implemented.
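Conceptually, each uninstall boils down to something like the following sketch (the module and theme names are hypothetical; the bootstrap script contains the real list):
bin/drupal module:uninstall some_retired_module
bin/drupal theme:uninstall some_retired_theme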
Updating dependencies through Composer is simple, though somewhat slow. First, we should spin down our local install:
docker-compose down
Then, we run the update command:
bin/composer update [name-of-package, e.g. drupal/core]
After crunching away a while, you should see (e.g. via git status) that the composer.lock file has changed. Note that this command doesn't modify composer.json -- it will only update the package in a way that's compatible. If you need to upgrade a major version (i.e. a backward-incompatible release), use the require command, e.g.
bin/composer require drupal/core:9.*
After installing the update, we should spin up our local instance
docker-compose up
and browse around http://localhost:8080/ to make sure nothing's obviously broken. We shouldn't expect to see anything amiss if we've just updated, but we need to be more careful around major version changes.
We should then proceed with steps five through eight (exporting the config, committing, sending to GitHub, etc.). Even though we haven't actively modified any of the configurations, the updated libraries may have generated new ones which would be good to capture.
web/sites/default/xxx won't go away
Drupal's installation changes the directory permissions for web/sites/default, which can prevent git from modifying these files. As we're working locally, those permissions restrictions aren't incredibly important. We can revert them by granting ourselves "write" access again. In unix environments, we can run
chmod u+w web/sites/default
As Docker is managing our environment, it's relatively easy to blow away our database and start from scratch.
docker-compose down -v
Generally, down spins down the running environment but doesn't delete any data. The -v flag, however, tells Docker to delete our data "volumes", clearing away all the database files.
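Putting that together, a full local reset looks like:
docker-compose down -v   # stop the containers and delete the database volume
docker-compose up        # rebuild, reinstall Drupal, and re-import configuration
Expect the second command to take several minutes, just like the first-ever startup.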
We prefer deploying code through a continuous integration system. This ensures reproducibility and allows us to add additional safeguards. Regardless of environment, however, our steps for deploying code are more or less the same:
cf executable (this can be done once)
Follow the Cloud Foundry instructions for installing the cf executable. This command-line interface is our primary mechanism for interacting with cloud.gov.
In a continuous integration environment, we'll always check out a fresh copy of the code base, but if deploying manually, it's important to make a new, clean checkout of our repository to ensure we're not sending up additional files. Notably, using git status to check for a clean environment is not enough; our .gitignore does not match the .cfignore, so git's status output isn't a guarantee that there are no additional files. If deploying manually, it makes sense to create a new directory and perform the checkout within that directory, to prevent conflicts with our local checkout.
git clone https://github.com/18F/osc-website-pa.git
As we don't need the full repository history, we could instead use an optimized version of that checkout:
git clone https://github.com/18F/osc-website-pa.git --depth=1
We'll also want to change our directory to be inside the repository.
cd osc-website-pa
An easy way to do this is to run the deploy-cloudgov.sh script. It should create the services you need (if they are not already created), wait until the services are up, and then launch the app and tell you what URL you should go to.
As a part of this process, some secrets are generated, like the initial
root password. If you want, you can override this by saying:
export ROOT_USER_PASS=yourreallygr3atpassw0rd.
and then running the deploy-cloudgov.sh script.
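Put together, a first deploy with a custom root password might look like this (the password is obviously just an example, and we're assuming the script is run from the repository checkout):
export ROOT_USER_PASS=a-long-unique-passphrase
./deploy-cloudgov.sh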
Our preferred platform-as-a-service is cloud.gov, due to its FedRAMP authorization. Cloud.gov runs the open source Cloud Foundry platform, which is very similar to Heroku. See cloud.gov's excellent user docs to get acquainted with the system.
We'll assume you're already logged into cloud.gov. From there,
cf apps
will give a broad overview of the current application instances. We expect two "osc-web" instances and one "osc-cronish" worker in our environments, as described in our manifest files.
cf app osc-web
will give us more detail about the "web" instances, specifically CPU, disk, and memory usage.
cf logs osc-web
will let us attach to the emitted apache logs of our running "osc-web" instances.
If we add the --recent flag, we'll instead get output from our recent log history (and not see new logs as they come in). We can use these logs to debug 500 errors. Be sure to look at cloud.gov's logging docs (particularly, how to use Kibana) for more control.
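For example, a quick (if blunt) way to scan the recent history for server errors is:
cf logs osc-web --recent | grep ' 500 '
Kibana gives far better filtering, but this can be a handy first pass.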
If necessary, we can also ssh into running instances. This should generally be avoided, however, as all modifications will be lost on next deploy. See the cloud.gov docs on the topic for more detail -- be sure to read the step about setting up the ssh environment.
cf ssh osc-web
While the database isn't generally accessible outside the app's network, we can access it by setting up an SSH tunnel, as described in the cf-service-connect plugin. Note that the osc-web and osc-cronish instances don't have a mysql client (aside from PHP's PDO); sshing into them likely won't help.
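With the plugin installed, opening a tunnel looks roughly like this (the service instance name below is a guess; run cf services to see the real one):
cf connect-to-service osc-web osc-db
The plugin prints temporary connection credentials and a local port you can point a mysql client at.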
Of course, there are many more useful commands. Explore the cloud.gov user docs to learn more.
As our secrets are stored in a cloud.gov "user-provided service", to add new ones (or rotate existing secrets), we'll need to call the update-user-provided-service command. It can't be updated incrementally, however, so we'll need to set all of the secrets (including those that remain the same) at once.
To grab the previous versions of these values, we can run
cf env osc-web
and look in the results for the credentials of our "osc-secrets" service (it'll be part of the VCAP_SERVICES section). Then, we update our osc-secrets service like so:
cf update-user-provided-service osc-secrets -p '{"SAMPLE_ACCOUNT":"Some Value", "SAMPLE_CLIENT":"Another value", ...}'
We use Cloud Foundry's multi-buildpack support to allow us to install a mysql client (essential for Drush). This also requires that we specify our PHP buildpack, which is unfortunate, as it means we can't rely on the cloud.gov folks to deploy it for us. Luckily, updating the PHP buildpack is easy, and we can check the latest version cloud.gov has tested.
First, we'll find the version number by querying cloud.gov.
cf buildpacks
The output will include a PHP buildpack with a version number, e.g. php-buildpack-v4.3.51.zip. This refers to the upstream (Cloud Foundry) buildpack version, so we'll update our multi-buildpack.yml accordingly:
buildpacks:
# We need the "apt" build pack to install a mysql client for drush
- https://github.com/cloudfoundry/apt-buildpack#v0.1.1
- https://github.com/cloudfoundry/php-buildpack#v4.3.51
We can also review cloud.gov's release notes to see which buildpacks have been updated, though it's not as timely.