A Workspace harness is a way to ship files to a project without being part of the project.
This repository contains a set of harnesses created for the PHP language. A "base" PHP harness provides a common set of templates, which greatly reduces the maintenance overhead of the individual harnesses.
Each framework will fully override a base harness file if differing behaviour is required.
A developer for a project can follow these steps to upgrade their harness version:

1. Edit the harness version (usually line 2 or 3) in `workspace.yml` or `.workspace.yml` to the new tagged version.
2. `rm -rf .my127ws`
3. `ws harness download`
4. Check for any project-level overrides of harness files in `tools/workspace/`.
5. `ws harness prepare`
6. Compare the `.my127ws/application/skeleton/` folder to the project root: if a templated file (in a `_twig` folder or named `*.twig`) is missing, copy it to the project.
7. Compare the project's `workspace.yml` to the harness's `harness.yml` and `harness/attributes/*.yml`.
8. Apply the update:
   * To keep your existing database: `ws harness update existing`
   * To do a fresh installation: `ws harness update fresh`
9. Commit the changes and raise a pull request.
10. Ask for someone else to test the pull request.
11. Once the pull request has been merged to the default branch of the project, remind the project team to apply the changes with the `ws harness update existing` or `ws harness update fresh` commands.
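As an illustration of the first step, the harness version is declared near the top of `workspace.yml`; the project name and version below are made up:

```yaml
workspace('my-project'):
  description: An example project
  harness: inviqa/symfony:2.0.0  # bump this version to the new tag
```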
## Helm charts
Each harness deploys:
* A "console" pod for running one-off commands
* An NGINX/PHP-FPM "webapp" pod for running the PHP-based application and serving web requests
* A service to route to the "webapp" pods
* An ingress definition to route via the "webapp" service
* Optionally, a "cron" pod for running cronjobs
* Optionally, elasticsearch, mysql, postgres, redis for supporting services
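As an illustration only, the optional components are typically toggled through chart values; the key names below are hypothetical, not the harness's actual values schema (the real attribute names live in the harness's Helm chart values and `harness/attributes/*.yml`):

```yaml
# Hypothetical values override sketch
cron:
  enabled: true        # deploy the optional "cron" pod
services:
  elasticsearch:
    enabled: false
  mysql:
    enabled: true      # enable only the supporting services the app needs
  postgres:
    enabled: false
  redis:
    enabled: true
```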
### Memory
The memory requests for pods have been deliberately set to be the same as the limits.
This is to avoid nodes going to "NotReady" status due to dockerd/containerd/kubelet being killed by the kernel.
An example:
Requesting 10Mi of memory but allowing the pod to spike to 1024Mi means that Kubernetes may schedule the pod onto a
node with only 10Mi of allocatable memory left; the scheduler does not consider limits at all when placing pods.
As soon as something in the pod uses more than 10Mi while the node is already at capacity, kubelet attempts to
kill processes in the container to get back down to 10Mi.
Sometimes kubelet does not manage to kick in fast enough and the Linux kernel's Out Of Memory (OOM) killer kicks in
instead. Whilst core kubernetes processes such as dockerd, containerd and kubelet have an extremely low priority for
the OOM killer, sometimes the kernel decides to kill one of these core processes anyway, as doing so would free up the
most memory, leading to the node having issues.
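Setting memory requests equal to limits means the scheduler reserves exactly what the pod is allowed to use. A minimal sketch of such a container `resources` block, with illustrative values:

```yaml
resources:
  requests:
    memory: 256Mi  # what the scheduler reserves on the node
  limits:
    memory: 256Mi  # equal to the request, so the pod can never use memory the node did not reserve
```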
## Testing
The final harness version for each of the frameworks is put together by the [build script](./build) into a "dist"
folder. This is used for testing in Jenkins.
### Quality check
We run shellcheck and hadolint across shell scripts and Dockerfiles. These can be run via:
```bash
./quality
```

The `./test` script described below also runs these quality checks against the rendered twig templates in
`tmp-test-<framework>-<mode>/.my127ws/` as used in a test project.
If you have access to the secret key needed to decrypt the `src/.ci/*/workspace.yml` encrypted attributes, you can run
the following to test the given framework in the given mode, as Jenkins does:

```bash
./build && ./test <framework> <static|dynamic> [mutagen]
```

Running with the environment variable `TEARDOWN_ENVIRONMENT=no` will keep the environment running so you can
debug a failure.
If you don't have access to the key, you can still bring up a test environment:

```bash
./build
mkdir tests
cd tests
ws create <framework>-test inviqa/<framework> --no-install
```

Copy the built harness from the `./dist` directory into the `<framework>-test/.my127ws` directory:

```bash
cp -pR ../dist/harness-<framework>/ <framework>-test/.my127ws/
```

Then, in the `<framework>-test` directory (`cd <framework>-test`), optionally disable mutagen and install, using the
`pipeline` environment mode to activate static mode:

```bash
echo "attribute('mutagen'): no" >> workspace.override.yml
MY127WS_ENV=pipeline ws install
```
## Releasing

Once a GitHub release has been created, a GitHub Action will build and create archives of each harness and upload them to the release.
We use GitHub release notes to generate and store changelogs.
When ready to tag a release, make a new branch from the `2.0.x` branch for the changelog entries:

1. Generate release notes.
2. For anything in the "Other Changes" section, examine the Pull Requests and assign each pull request either an
   `enhancement` label for a new feature, `bug` for a bugfix or `deprecated` for a deprecation. Where relevant, also
   apply the appropriate `harness-*` label.
3. Update version references in the documentation, e.g.:

```bash
sed -i '' 's/1\.6\.0/2.0.0/' README.md src/*/README.md src/*/docs/*.md src/*/docs/*/*.md
```
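The `sed` invocation above uses BSD sed's `-i ''` in-place form (macOS); GNU sed would use plain `-i`. To preview the substitution before running it in place, you can pipe a file through the same expression (the file contents below are made up):

```shell
# Demonstrate the version substitution on a throwaway file
printf 'Requires harness 1.6.0\n' > /tmp/readme-demo.md
# Dots are escaped so "1.6.0" matches literally rather than as "any character"
sed 's/1\.6\.0/2.0.0/' /tmp/readme-demo.md
```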
When you're ready to release:

1. Generate the release notes for the new tag.
2. Update `harnesses.json`.
If the next release does not make sense to be in the current `2.0.x` branch:

```bash
git checkout -b 2.1.x
grep -FR '2.0.x' . | grep -v dist/
# Edit resulting files
grep -FR '2.0.0' README.md
# Edit resulting files
git add -p
git commit
git push origin -u HEAD
```