Closed: bonham000 closed this 4 years ago
> and I think are just not strictly necessary to run against PRs

So when would they get run?
They will still run in the deploy pipeline to complete full checks before deploying the app, as mentioned above.
It's really only omitting the Cypress tests, but that also allows the workflow to omit the client build, which also takes a long time, so it reduces the workflow time by more than half.
It does open the possibility that a PR could be green but then fail in the deploy pipeline, but I think this is not very likely, and in fact it can already happen because the Cypress tests are somewhat unreliable. So it is a tradeoff, but I feel okay about it if it speeds up PR workflows this much.
Also, running different levels of testing at different stages of the dev-to-production process is fairly common.
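For example, that split could look roughly like this in GitHub Actions terms (a rough sketch, not our actual config; the workflow layout and script names here are made up for illustration):

```yaml
# .github/workflows/pr-tests.yml (hypothetical) — fast checks on every PR
on: pull_request
jobs:
  pr-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: yarn install --frozen-lockfile
      # Unit tests, linting, and type checking for all packages
      - run: yarn lint && yarn test
      # e2e suite still runs here; no client build, no Cypress
      - run: yarn e2e
```

```yaml
# .github/workflows/deploy.yml (hypothetical) — full checks before deploying
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: yarn install --frozen-lockfile
      # Full build, including the slow client build
      - run: yarn build
      # Full Cypress suite runs only at this stage
      - run: yarn cypress:run
```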
I don't think there are any cache invalidation issues.
I tested the install caching step a few times and it appears to rebuild correctly if any dependency files change.
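The caching here is a docker image cache resource, but the invalidation principle is the same as keying any cache on the dependency manifests. In GitHub Actions terms it would look roughly like this (a hypothetical sketch, not our actual setup):

```yaml
# Hypothetical: cache installed dependencies, invalidated only when
# a package.json or yarn.lock file changes anywhere in the repo.
- uses: actions/cache@v3
  with:
    path: |
      node_modules
      **/node_modules
    key: install-${{ runner.os }}-${{ hashFiles('**/package.json', '**/yarn.lock') }}
- run: yarn install --frozen-lockfile
```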
Currently, all the packages are built with a single command: `yarn build`. This builds `common/` first and then builds the other packages in parallel with Lerna. If we wanted to separate and cache the client build step, we would need to create a separate stage which builds the `common/` and then the `client/` packages. This stage would need to declare those packages as inputs, and only rebuild when their code changes. It would then need to be followed by another stage which builds the other packages (which I don't think would be worth caching, because those builds are all very fast).
All of that is possible, but it would turn one command into a much more complex multi-stage build process.
What's more, any changes to the course material would trigger rebuilds, since the courses live in the `common/` package. That means most PRs would trigger rebuilds, since we are most likely to change the course material or the `client/` or `common/` packages, which might negate the benefit of adding caching at all. Trying to exclude the course material would add more complexity to the docker build stage, and it may not make sense anyway.
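For concreteness, the multi-stage variant I decided against would look something like this (a hypothetical sketch; the package paths, workspace names, and scripts are illustrative, not our real ones):

```yaml
jobs:
  build-client:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/cache@v3
        with:
          path: |
            packages/common/dist
            packages/client/build
          # Any change under common/ — including course material — busts
          # this key, so most PRs would rebuild anyway.
          key: client-${{ hashFiles('packages/common/**', 'packages/client/**') }}
      # Hypothetical workspace names
      - run: yarn workspace @pairwise/common build && yarn workspace @pairwise/client build

  build-rest:
    needs: build-client
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # The remaining packages build fast, so caching them isn't worth it
      - run: yarn build
```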
So... all of that is to say I started going down that road and decided the complexity versus simply running `yarn build` is not worth it. In addition, I think it would only save around ~5 minutes. That's when I arrived at the idea of just omitting the client build altogether for PR workflows, which I liked based on the 3x time improvement, even if it means potentially breaking the deploy workflow occasionally (if this becomes a major problem I'm happy to revisit).
This PR:

- Caches the install step based on the `package.json` and `yarn.lock` files using this docker image cache resource, which would help resolve https://github.com/pairwise-tech/pairwise/issues/3. This will try to cache docker build outputs in the GitHub container package registry for this repo. Sadly, this won't speed up the workflows that much... but it should save about 5 minutes from each test and deploy workflow. The next major slow step is the `client` build, but I don't feel particularly ambitious about trying to add caching for this, and I had the following idea to improve speed instead:
- Omits the `client/` build and the Cypress tests from the PR test workflow. These take the most time (after the initial install step, which is now cached), and I think they are just not strictly necessary to run against PRs, especially now since we are going to be mostly modifying content. They will still run in the deploy pipeline to complete full checks before deploying the app; however, omitting them from the PR test workflow should speed up the turnaround time for PR tests dramatically. This keeps all unit tests (including linting and type checking) for all packages in the PR test workflow, and running the `e2e` test suite there still provides substantial test coverage. Doing this drops the test workflow time to less than 10 minutes.

TL;DR: