phmarek opened this issue 3 weeks ago
I didn't even realize there was a Dockerfile here. I think the problem is that no contributor seems to use it regularly, so it got neglected. Thanks for pointing it out.
Actually, it seems to have been built 2 weeks ago. That matches this Docker image, but I'm not sure it's the actual image we generate in GitHub Actions here. Was this the image you were testing?
Ok, on closer inspection the Action seems to only build the `latest` tag (which builds off of the `master` branch). The pinned releases (like `0.9.4`, which is probably the image you tested) are only created once, when the tag is pushed to the repo. This is a bit problematic, because the latest pinned version (`0.9.4`) was generated 8 months ago.
We could just regenerate all Docker image tags on every push to `master`, but I think that would get out of hand pretty fast. Probably better to use the `schedule` GitHub Actions trigger to rebuild all release tags once a week or so.
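A minimal sketch of what that scheduled workflow could look like (the tag list, image name, and cron time below are placeholders, not actual values from this repo):

```yaml
# Hypothetical workflow sketch: rebuild the pinned release images weekly
# so their base layers pick up security updates. The tag list and image
# name are placeholders.
name: rebuild-release-images
on:
  schedule:
    - cron: "0 3 * * 1"  # Mondays at 03:00 UTC
jobs:
  rebuild:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        tag: ["v0.9.4", "v0.9.3"]  # placeholder release tags
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ matrix.tag }}
      - name: Rebuild image
        run: |
          docker build -t scryer-prolog:${{ matrix.tag }} .
          # a docker push step, gated on registry credentials, would go here
```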
@gruhn @panasenco Pinging because you two seem to be the ones that have worked the most on the Docker images.
Haven't used Scryer in a while, but will take a stab at this.
There are a couple of things going on here. First, the Dockerfile needs to be updated and tested locally to make sure Scryer still starts. @phmarek, is there a reason to use `unstable` or `testing`? Even `stable-slim` doesn't have any critical vulnerabilities.
Then, to @bakaq's point, the base image of at least the latest N tags could be updated on a regular basis using a scheduled job.
I just tried and I remember now that there's a libssl error I documented in the Dockerfile, though it's kind of cryptic, even to me.
Probably relevant: https://github.com/mthom/scryer-prolog/pull/2013
Well there we go, thanks @bakaq! Looks like we pinned it to be old on purpose :sweat_smile:
Nah, we're good. I just wrote a new version of the Dockerfile that keeps the Rust builder and executable versions consistent, and it works now: https://github.com/panasenco/scryer-prolog/blob/master/Dockerfile
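The key idea, as a rough sketch (the real file is at the link above; the release names and paths here are assumptions): pin the builder and the runtime image to the same Debian release, so the OpenSSL the binary links against matches the one shipped in the final image.

```dockerfile
# Sketch only, not the linked Dockerfile. Both stages use the same
# Debian release ("bookworm") so shared-library versions stay consistent.
FROM rust:1-slim-bookworm AS builder
WORKDIR /build
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
COPY --from=builder /build/target/release/scryer-prolog /usr/local/bin/scryer-prolog
CMD ["scryer-prolog"]
```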
Next, the CI update...
@bakaq, I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches? If not, would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use `latest`?
I just don't want to write something crazy that no one but me can maintain. :sweat_smile:
Also occurs to me that even if we found a way to rebuild the previous tags, they're all using `bullseye-slim` as the base, which has reached end-of-life and has critical CVEs, so the security-conscious won't want to use them anyway.
[...] I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches?
I'm not very familiar with Docker, so I probably can't help much here. One approach would be to find which tags are published in the Docker repository and only update those; then we could simply delete tags to deprecate them. But I have no idea how to do that in a way that isn't incredibly obscure.
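For the "find which tags are published" part, a rough sketch (the repository path in the comment is an assumption, and a hard-coded sample response stands in for the network call here):

```shell
# Hypothetical sketch: extract published tag names from a Docker Hub
# API response. In a real job the JSON would come from something like:
#   curl -s "https://hub.docker.com/v2/repositories/<owner>/scryer-prolog/tags?page_size=100"
response='{"results":[{"name":"latest"},{"name":"0.9.4"},{"name":"0.9.3"}]}'
# Pull out each "name" field (jq would be cleaner, if available).
echo "$response" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
```

This prints one tag per line, which a scheduled job could then iterate over to rebuild each image.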
[...] would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use latest?
Probably not, because `latest` builds against `master`, which isn't stable and so isn't really fitting for a lot of use cases, including the ones where security matters the most.
Also occurs to me that even if we found a way to rebuild the previous tags, they're all using bullseye-slim as the base, which has reached end-of-life and has critical CVEs, so the security-conscious won't want to use them anyway,
I can't think of a way to solve this that doesn't involve making branches for the old releases. But also, I don't think `bullseye-slim` is deprecated: it was last pushed 12 days ago and is still listed in the image overview. They probably just haven't updated it yet.
I can't find any easy way to do something like "rebuild the latest N tags". Can you think of any approaches?
I'm also looking into this now and it is annoying. I guess we could have a scheduled job that checks out each version tag but replaces its `Dockerfile` with the one from `master`. I can imagine that this will break often though: we would have to maintain a Dockerfile that's compatible with the repo's state at each version tag.
I just don't want to write something crazy that no one but me can maintain. 😅
Don't worry, I'm also committed to maintaining at least the Docker/CI business 😉
[...] would it be acceptable to keep the existing tags frozen and ask security-conscious folks to use latest?
Probably not, because latest builds against master, which isn't stable and so is not really fitting for a lot of use cases, including the ones where security matters the most.
I'm starting to lean towards the lazy way out as well :S Maybe the maintenance overhead is not worth it for a Docker image that's rarely used. I would even suggest removing all those version-tag-based images and only keeping the `latest` one. At least then nobody uses these outdated images with a false sense of security. That's definitely not enterprise-level service, but for people who seriously want to use Scryer in production, it's not hard to create their own Docker image. For me at least, the Docker image was just a quick and easy way to play with Scryer, without also having to install the entire Rust toolchain (not even true anymore, with the Nix package, WASM build, etc.).
How about producing a minimal image, i.e. one that only contains the `scryer-prolog` binary, `libc6`, `libnss`, and as few other libraries as possible?
Most of the stuff listed in the vulnerability report (`perl-base`, `libsystemd0`, `e2fsprogs`, ...) is not needed in a Scryer image; if the image contains only the bare minimum, updates will be required less often, too.
Basically something like this:

```dockerfile
# Installation as now
RUN mkdir /image
RUN rsync -vaR \
    /usr/local/bin/scryer-prolog \
    /usr/lib64/... and other files \
    /image/

# Last stage, return actual image
FROM scratch
COPY --from=0 /image/ /
CMD ["/usr/local/bin/scryer-prolog", "--no-add-history"]
```
That's what we're doing, see https://gitlab.opencode.de/brz/containerplattform/minimal-image.
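To work out which files such a minimal image actually needs, one can list the shared-library dependencies of the binary; a sketch (using `/bin/ls` as a stand-in, since a locally installed `scryer-prolog` can't be assumed; mapping each path to its Debian package would then be a `dpkg -S` call per file):

```shell
# List absolute paths of shared libraries a binary depends on.
# /bin/ls stands in for /usr/local/bin/scryer-prolog here.
ldd /bin/ls | awk '/=>/ && $3 ~ /^\// {print $3}'
```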
Another idea would be to remove all unnecessary packages, but that means fighting `dpkg`'s notion of `essential` packages, and (in my experience) it is not faster; it just leaves more stuff behind, because the granularity is coarser (packages instead of files).
If there's a list of required files, and from that a list of required packages, a periodic job can check whether one of these changed - and only then run the GitHub build process.
Doing that for `latest` and the last release (which could be hardcoded in the script) should be good enough, IMO.
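That periodic check could be sketched roughly like this (the package-version list is a placeholder; a real job would derive it from the required files via `dpkg -S` and read the installed versions out of the current base image):

```shell
# Hypothetical sketch: rebuild only when tracked package versions change.
# The hard-coded list stands in for real dpkg query output.
state_file=$(mktemp)

current_state=$(printf '%s\n' "libc6=2.36-9" "libssl3=3.0.11-1" \
  | sha256sum | cut -d' ' -f1)

if [ "$(cat "$state_file")" != "$current_state" ]; then
  echo "changed: trigger the GitHub build process"
  printf '%s' "$current_state" > "$state_file"
else
  echo "unchanged: skip the rebuild"
fi
```

In a real job, `state_file` would be a persistent path (e.g. a cached artifact) rather than `mktemp`, so that runs after the first one skip the rebuild until a version string actually changes.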
Oh, sounds like this got auto-closed when #2647 was merged. @mthom can you re-open?
How about producing a minimal image [...] if the image contains only the bare minimum, updates will be required less often, too.
I agree that's a good idea in general. But we would still need some process to update the base image of old versions. It wouldn't be that much work to do it manually once in a while, but @mthom owns the DockerHub credentials. So either we have to ask him to do it for us, or he needs to give one of us permissions; I don't think either is a fair ask.
So it should be some automatic process. But I can't think of a way to make this run stably long-term. For example, say we go with the approach I suggested: rebuilding each version tag with the Dockerfile from `master`. Then at some point a dependency is updated in the Dockerfile which is incompatible with version 0.9.2 or whatever, and one month later, when the scheduled pipeline runs, the image build fails. This stuff is just annoying to fix. It's not just: find problem -> open PR -> merge -> done. Maybe we have to create a branch off of the version tag and adjust the Dockerfile there. Maybe we have to re-push the Git version tag to align it with that change (and ask @mthom to do that for us). Then we probably also have to ask @mthom to manually re-push the Docker image. That's all not a big deal, but the size of the deal should be proportional to the value that the Docker image provides. And I suspect not many people care.
If there's a list of required files, and from that a list of required packages, a periodic job can check whether one of these changed - and only then run the GitHub build process. Doing that for latest and the last release (which could be hardcoded in the script) should be good enough, IMO.
But where do we maintain this list of files? Usually you would just spell it out in the Dockerfile: when something changes, we change the Dockerfile. But where do we maintain this information for past versions of Scryer? We could have a dedicated branch for each version, as @bakaq suggested, but then we are starting to bend the repo just to accommodate the Docker image. If there is a simple process to keep the base image of the last release up to date, it's probably not much more work to keep the base image of all prior releases up to date. I'm thinking of some tool like Dependabot, but I haven't found anything that really fits the use case.
Please use Debian Testing or Unstable when building the Docker image, and/or refresh them more often. Thanks!