This codebase is the website for the Ministry of Justice, which hosts Civil and Family Procedure Committee Rules content only.
N.B. README.md is located in `.github/`
A visual overview of the architectural layout of the development application.
The application uses Docker. This repository provides two separate local test environments:

- Docker Compose
- Kubernetes

Docker Compose provides a pre-production environment in which to develop features and apply upgrades, while Kubernetes allows us to test and debug our deployments to the Cloud Platform.
In a terminal, move to the directory where you want to install the application, then run:

```bash
git clone https://github.com/ministryofjustice/justice-gov-uk.git
```

Change directory:

```bash
cd justice-gov-uk
```
Next, depending on the environment you would like to launch, do one of the following.
This environment has been set up to develop and improve the application.
The following make command will get you up and running. It creates the environment, starts all services, and opens a command prompt on the container that houses our PHP code (a service named `php-fpm`):

```bash
make
```
During the `make` process, the Dory proxy will attempt to install. You will be guided through the installation, if needed.
You will have six services running, each with a different access point:

**Nginx**

http://justice.docker/

**PHP-FPM**

```bash
make bash
```

On first use, the application will need initializing with the following command:

```bash
composer install
```

**Node**

This service watches and compiles our assets; there is no need to access it. The output of this service is available on STDOUT.

When working with JS files in the `src` directory, it can be useful to develop from inside the Node container. Using a devcontainer allows the editor to access the `node_modules` directory, which is good for IntelliSense and type safety. When using a devcontainer, first start the required services with `make` and then open the project in the devcontainer. Be sure to keep an eye on the Node container's terminal output for any Laravel Mix errors.

The folder `src/components` is used when it makes sense to keep a group of SCSS/JS/PHP files together. The folder `src/components/post-meta` is an example where PHP is required to register fields in the backend, and JS is used to register fields in the frontend.

**MariaDB**

Internally accessed by PHP-FPM on port 3306.

**PHPMyAdmin**

http://justice.docker:9191/

Login details are located in `docker-compose.yml`.

**Jekyll**

Transforms the markdown docs in `.github/pages` to HTML. The output is served at http://pages.justice.docker
There is no need to install application software on your computer. All required software is built within the services, and all services are ephemeral.

Multiple volume mounts are created in this project and shared across the services. This approach speeds up and optimises the development experience.
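As an illustration, a shared bind mount in a Compose file looks roughly like the following (the service names and paths here are illustrative, not the project's actual configuration):

```yaml
services:
  php-fpm:
    volumes:
      # one copy of the code, shared by every service that needs it
      - .:/var/www/html
  nginx:
    volumes:
      # the same mount, so nginx serves exactly what php-fpm runs
      - .:/var/www/html
```

Sharing a single mount like this means an edit on the host is immediately visible to every container, with no rebuild or copy step.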
This environment is useful to test Kubernetes deployment scripts.
Local setup attempts to get as close to development on Cloud Platform as possible, with a production-first approach.
```bash
sudo nano /etc/hosts
```

On a new line, add:

```
127.0.0.1 justice.local
```
Once the above requirements have been met, launch the application by executing the following make command:

```bash
make local-kube
```

The following will take place:

- a local cluster is created using `deploy/config/local/cluster.yml`
- the manifests are applied: `kubectl apply -f deploy/local`
- the pods are watched: `kubectl get pods -w`
Access the running application here: http://justice.local/
In the MariaDB YAML file you will notice a persistent volume claim. This keeps application data across restarts, preventing you from having to reinstall WordPress every time you stop and start the service.
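For reference, a persistent volume claim of this kind is shaped roughly like this (the name and size below are illustrative, not the project's actual values):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce           # one node mounts the volume read-write
  resources:
    requests:
      storage: 1Gi            # illustrative size
```

Because the claim outlives the pod, the database files survive a stop/start cycle of the MariaDB service.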
Most secrets are managed via GitHub settings.

It was our intention that WordPress keys and salts be auto-generated before the initial GHA build stage. However, despite extensive testing, the result was not as desired: dynamically generated secrets could not be hidden in the log outputs. Because of this, secrets are managed in GitHub settings.
```bash
# Make interaction a little easier; we can create repeatable
# variables. Our namespace is the same name as the app, defined
# in ./deploy/development/deployment.tpl
#
# If interacting with a different stack, change the NSP var.
# For example:
# - production: change to 'justice-gov-uk-prod'

# Set some vars; get the first available pod
NSP="justice-gov-uk-dev"; \
POD=$(kubectl -n $NSP get pod -l app=$NSP -o jsonpath="{.items[0].metadata.name}");
```

```bash
# Local interaction is a little different:
# - local: change NSP to 'default' and the app label to 'justice-gov-uk-local'
NSP="default"; \
POD=$(kubectl -n $NSP get pod -l app=justice-gov-uk-local -o jsonpath="{.items[0].metadata.name}");
```
After setting the above variables (copy, paste, execute), the following blocks of commands will work in the same way.
```bash
# list available pods and their status for the namespace
kubectl get pods -n $NSP

# to watch for updates, add the -w flag
kubectl get pods -w -n $NSP

# describe the first available pod
kubectl describe pods -n $NSP

# monitor the system log of the first pod container
kubectl logs -f $POD -n $NSP

# monitor the system log of the fpm container
kubectl logs -f $POD -n $NSP fpm

# open an interactive shell on an active pod
kubectl exec -it $POD -n $NSP -- ash

# open an interactive shell on the fpm container
kubectl exec -it $POD -n $NSP -c fpm -- ash
```
The test suites for this project use:

- Codeception, which collects and shares best practices and solutions for testing PHP web applications.
- The wp-browser library, which provides a set of Codeception modules and middleware to enable the testing of WordPress sites, plugins and themes.
- WP_Mock, which is used in unit tests to mock WordPress functions and classes.

So far, only unit tests have been written. The unit tests are located in the `spec` directory.
To run the unit tests during development, use the following commands:

```bash
make bash
composer test:unit
```

Or, to watch for changes:

```bash
composer test:watch
```
Create a bucket with the following settings:

- Region: `eu-west-2`
Create a deployment with the following settings:
To restrict access to the Amazon S3 bucket, follow the guide to implement origin access control (OAC): https://repost.aws/knowledge-center/cloudfront-access-to-amazon-s3
To use a user's access keys, create a user with a policy similar to the following:

```json
{
    "Sid": "s3-bucket-access",
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::bucket-name"
}
```
An access key can then be used for testing actions related to the S3 bucket; set the credentials via environment variables.
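For example, the standard AWS CLI/SDK environment variables can carry the key (the values below are placeholders taken from AWS documentation examples; real keys come from the IAM user created above and must never be committed):

```shell
# Placeholder credentials; substitute the real access key pair.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="eu-west-2"

# e.g. list the bucket contents to confirm access (bucket name is illustrative)
# aws s3 ls s3://bucket-name
```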
When deployed, server roles should be used instead.
To verify that S3 & CloudFront are working correctly.
`deploy/[stack]/[file].tpl.yml`
At the start of this project, we understood that our production image would be managed by environment variables. These variables would change the behaviour of our image, rendering a single image useful in development, staging and demo environments, in addition to production.

We believe that thinking in this way allows the team to reduce complexity in our application. Making an image reusable in this way presents us with a challenge: we must introduce variables into the image in a highly dynamic way.
Our solution to introducing this dynamism is as follows. Considering our goal to reduce complexity, we opted to use tools already available in the native scripting language: we find and replace environment variables using the shell's `envsubst` command.

To achieve this, we create YAML files denoted as templates, `[file].tpl.yml`, which house our variable names.
In our workflow file, located in `.github/workflows/deploy.yml`, we inject the environment variables.
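A workflow step of this kind might be sketched as follows (the step name, variable and file paths are illustrative, not the actual contents of `deploy.yml`):

```yaml
- name: Render manifest from template
  env:
    IMAGE_TAG: ${{ github.sha }}   # illustrative variable
  run: |
    envsubst < deploy/development/deployment.tpl.yml > deployment.yml
    kubectl apply -f deployment.yml
```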
We find this approach simple, highly readable and portable. Our CI/CD image build and deploy takes just 1 minute 20 seconds to reach development, and then just 10 seconds to deploy across other stacks; this is testament to the impact our goal has had on performance.