Some things I noticed; not all of them necessarily require fixing:
The current CI builds the Docker image four times in total: once each for checking and testing the Python and the JS. These are all parallel steps, so it's probably not a big deal. The checks run in Docker, which may or may not be better than running them outside Docker (as we do for job-server).
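If we did want to avoid the repeated builds, one option would be to build once and hand the saved image to the other jobs, along these lines. This is a rough sketch with made-up job, image, and command names, not our actual workflow:

```yaml
# Hypothetical sketch, not our actual workflow: build the image once and
# share it with the check/test jobs as an artifact. Job, image and command
# names here are made up.
jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and save the Docker image
        run: |
          docker build -t app:ci .
          docker save app:ci | gzip > image.tar.gz
      - uses: actions/upload-artifact@v4
        with:
          name: docker-image
          path: image.tar.gz

  check-py:
    needs: build-image
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: docker-image
      - name: Load the image and run the Python checks
        run: |
          gunzip -c image.tar.gz | docker load
          docker run --rm app:ci make check  # placeholder check command
```

Whether the artifact upload/download overhead actually beats four parallel builds isn't obvious, which is part of why this may not be worth changing.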
We compress the Docker image with gzip, when we could use zstd, which is typically faster to compress and decompress at a similar ratio.
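The change would be roughly the following (illustrative image name and compression level, not our exact commands):

```yaml
# Illustrative only: swap gzip for zstd when saving/loading the image.
- name: Save and compress the Docker image
  run: |
    # -T0 uses all available cores; higher levels trade speed for size
    docker save app:ci | zstd -T0 -3 > image.tar.zst

- name: Load the compressed image
  run: |
    zstd -dc image.tar.zst | docker load
```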
The deploy has the same concurrency-locking problem that job-server's CI had: it locks on the deploy step, not the whole workflow, which means two deploys started at around the same time could run out of order if the second run reaches the deploy step before the first one does.
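A workflow-level lock would look something like this (the group name is illustrative), so that whole runs queue behind each other rather than just the deploy step:

```yaml
# Sketch: declare the concurrency group at the top level of the workflow,
# not on the deploy job. Group name is illustrative.
concurrency:
  group: deploy-${{ github.ref }}
  cancel-in-progress: false
```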
We don't build the Docker image for job-server's analogous checks; we run them on the code directly. Whether that's better or not, I'm not completely sure.
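For comparison, the job-server-style approach is roughly the following, with the Python version, requirements file, and check command as placeholders for whatever the project actually uses:

```yaml
# Sketch of running the checks directly on the runner, without Docker.
# The Python version, requirements file and check command are placeholders.
check:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-python@v5
      with:
        python-version: "3.11"
    - name: Install dev dependencies and run the checks
      run: |
        python -m pip install -r requirements.dev.txt  # placeholder filename
        make check                                     # placeholder command
```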