nilsvu opened this issue 3 years ago
I'm now trying to do something similar to this, but not exactly the same.
My idea is to have pre-built executables in the container that people can run without having to worry about building spectre. However, this makes the image fairly large, so we wouldn't want to use it for CI. Also, the image with pre-built executables doesn't need all the extra compilers that CI requires; only one is needed. Here are the different tags I propose when editing the current Dockerfile:
`sxscollaboration/spectre:base`
- `FROM ubuntu:20.04 AS base`

`sxscollaboration/spectre:ci`
- `FROM base AS ci`

`sxscollaboration/spectre:visualize`
- `FROM base AS visualize` (note that this is based off `base`, not `ci`)

`sxscollaboration/spectre:demo`
- `FROM visualize AS demo`
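To make this concrete, here is a minimal sketch of how the stages could be laid out in a single multi-stage Dockerfile. The package lists and build steps are placeholders, not the actual dependencies:

```dockerfile
FROM ubuntu:20.04 AS base
# Placeholder: dependencies shared by all images
RUN apt-get update && apt-get install -y cmake git

FROM base AS ci
# Placeholder: the extra compilers and tools that only CI needs
RUN apt-get update && apt-get install -y clang-10 gcc-10 g++-10

FROM base AS visualize
# Placeholder: ParaView etc. (built on `base`, not `ci`)
RUN apt-get update && apt-get install -y paraview

FROM visualize AS demo
# Placeholder: fetch the source and pre-build the executables that
# people can run out of the box
RUN git clone https://github.com/sxs-collaboration/spectre.git /work/spectre
```

Each tag would then be built with `docker build --target <stage> -t sxscollaboration/spectre:<stage> .`.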
Regarding @nilsvu's suggestions, I guess we'd only have to worry about when somebody changed the `base` or `ci` parts of the Dockerfile, as those are what GitHub CI would need. Then, for the `demo` tag, this image could be updated every release so that we maintain up-to-date executables in the container.
Thoughts on this would be appreciated. I'm going to test this locally and on my personal DockerHub to see how it works out.
From preliminary tests building on my local computer, the current image on `sxscollaboration/spectrebuildenv` is ~8.5G. Here are the sizes for each of the images I have (as of 8/17/22):

- `knelli/spectre:base` = 3.4G
- `knelli/spectre:ci` = 6.2G
- `knelli/spectre:visualize` = 6.0G
- `knelli/spectre:demo` = 6.7G

Already these are individually smaller than our current image, which I think is really good. Especially `base`.
I think this sounds great. Having a demo container with a spectre build and ParaView installed would be very useful for tutorials. A few questions:

- Why can the `ci` container be smaller than our current one? My understanding is that it should be identical.
- Could we start by adding the `demo` container on top of what we currently have, building it automatically at every release? Then we can automate the dev container build process in a separate PR before we split out the `base` and `vis` containers?
- I think we'll want the `vis` features on CI as well, to test visualization scripts and eventually to test in-situ visualization. I see your point in trying to make the demo container smaller though. Any ideas how to accomplish this? Edit: I believe this is possible with multi-stage Docker builds (https://docs.docker.com/develop/develop-images/multistage-build/). You would install ParaView etc. in the `vis` container, and for the `ci` container you selectively copy over the build artifacts from `vis` that you need (or the other way round).
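A minimal sketch of that selective copying, assuming a `base` stage as above (the copied paths are hypothetical):

```dockerfile
FROM base AS vis
# Install ParaView in the vis stage
RUN apt-get update && apt-get install -y paraview

FROM base AS ci
# Copy over only the artifacts the ci image actually needs, instead
# of inheriting everything from vis (paths are hypothetical)
COPY --from=vis /usr/bin/pvpython /usr/bin/pvpython
COPY --from=vis /usr/lib/python3/dist-packages/paraview /usr/lib/python3/dist-packages/paraview
```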
Regarding the size question: I did add two things to the `ci` container, so that probably has something to do with it. But it seems strange that those two things would add so much.
I've now settled on three images: `dev` (formerly `base`, just to keep the naming similar to our develop branch), `ci`, and `demo`. The `vis` image only had ParaView on top of `base` and didn't really seem like it was needed, so I just include ParaView in `demo` now.
I also looked into automatically building and pushing the `dev` and `ci` images using some special GitHub actions when somebody changes the Dockerfile. The conclusion I came to was that it's doable, but very complicated and error prone, and it probably wouldn't really save us time for how seldom we change the container. So my plan is to still do this "Docker dance", as you say. It's slightly annoying, but I imagine it will be much less annoying than trying to fix a very complicated workflow in the future.

Automating the `demo` image is actually fairly straightforward: just add a new step at the end of the Release workflow that builds the new `demo` image and pushes it to DockerHub using some secrets.

Quick note: I suggest you put the demo container build in the `PostRelease.yaml` workflow. It runs after the release is published, and it's ok if it runs for a while.
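A rough sketch of what such a job could look like in `PostRelease.yaml`, using the standard `docker/login-action` and `docker/build-push-action` actions (the job name, secret names, and build context are assumptions):

```yaml
jobs:
  deploy_demo_container:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          # Secret names are assumptions
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push the demo image
        uses: docker/build-push-action@v3
        with:
          context: .
          target: demo
          push: true
          tags: sxscollaboration/spectre:demo
```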
Feature request:

It would be nice if CI could build and push the Docker container without disrupting other CI runs or user environments. Currently we have to do the "Docker dance": when a PR updates the Dockerfile, a core dev builds the container and pushes it to `sxscollaboration/spectrebuildenv:latest`. This may break all other PRs and user environments until the PR that adjusts to the new container has been merged and rebased on. Here is a possible procedure for how I would like updating the container to work instead:
- Make note of the new container tag. It's the commit hash of the latest commit that edits the container.
- In a follow-up commit that requires the new container, update the container tag in these places in the repo:
  - `Metadata.yaml`
  - `Tests.yaml`
  - `.devcontainer.json`
- You will only be able to merge the PR that edits the Dockerfile once these places refer to the new tag.
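For reference, a workflow step could derive that tag along these lines. This is a sketch; the Dockerfile path is an assumption:

```yaml
- name: Compute container tag
  # Hash of the latest commit that touched the Dockerfile (path is
  # an assumption). Requires a full clone, i.e. `fetch-depth: 0` in
  # the checkout step.
  run: |
    TAG=$(git log -1 --format=%h -- containers/Dockerfile)
    echo "CONTAINER_TAG=$TAG" >> $GITHUB_ENV
```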
I think this would work, but I haven't figured out how to handle permissions yet. When the CI workflow is triggered by the PR that updates the Dockerfile, it doesn't have access to the DockerHub secrets, so it can't push the new container; nor should it publish anything without approval. But the container needs to be published before the PR is merged, so the PR can include a commit that upgrades to the new container and run tests on it. We could wrap the DockerHub secrets in a GitHub environment that the action requests access to. Then maybe we trigger the action from every PR that edits the Dockerfile, and only approve it when we want the image published. Or we trigger the build-and-push with a special PR comment. Any suggestions are welcome here.
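The environment idea could be expressed along these lines (environment name, Dockerfile path, and trigger are assumptions):

```yaml
on:
  pull_request:
    paths:
      - containers/Dockerfile  # path is an assumption
jobs:
  build_and_push:
    runs-on: ubuntu-latest
    # A protected environment: the job pauses until a maintainer
    # approves it, and only then receives the environment's secrets.
    environment: dockerhub
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      # ...build and push with the commit hash as the tag, as in the
      # sketches above
```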