clux opened this issue 5 years ago
Exploring this in https://github.com/kube-rs/version-rs/pull/4 as a way to use muslrust with multi-stage builds without needing cargo-chef. If it works out, I'll port some docs into here.
This type of multi-stage Dockerfile works well with `DOCKER_BUILDKIT=1`:
```dockerfile
FROM clux/muslrust:stable AS builder
COPY Cargo.* .
COPY version.rs version.rs
RUN --mount=type=cache,target=/volume/target \
    --mount=type=cache,target=/root/.cargo/registry \
    cargo build --release --bin version && \
    mv /volume/target/x86_64-unknown-linux-musl/release/version .

FROM gcr.io/distroless/static:nonroot
COPY --from=builder --chown=nonroot:nonroot /volume/version /app/
EXPOSE 8080
ENTRYPOINT ["/app/version"]
```
Caching works great when developing locally. However, GitHub Actions' CI caching strategies only seem to do layer caching (they don't save the cache-mount directories), causing full rebuilds on CI. Maybe there are some tricks to get this cached as well, but I gave up for now.
Ah. It's not supported: https://github.com/moby/buildkit/issues/1673 and https://github.com/docker/buildx/issues/399. It even seems to get derided that it's asked for, lol.

OK, https://github.com/moby/buildkit/issues/1512 seems a little more positive at least.
Hi just wanted to chime in that I tested a setup like this and locally I still need to rebuild all the deps when I use buildkit's caching. Was that the case for you as well when you tested it? My understanding is that this is one of the things that cargo-chef allows you to bypass.
No, I can actually avoid rebuilding deps and untouched files without cargo-chef. You can test it out in version-rs with:

```sh
DOCKER_BUILDKIT=1 docker build . # full build the first time
# edit version.rs
DOCKER_BUILDKIT=1 docker build . # only compiles the main executable and links
```
`cargo-chef` effectively splatters the relevant cache across docker layers that CI systems like Circle/GitHub generally know how to cache, whereas the buildkit cache ends up somewhere in an internal system path that's hard to cache. Works locally, but not on CI :(
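For reference, the layer-splitting that `cargo-chef` does can be sketched roughly like this (an illustrative, untested sketch; stage names and the `version` binary follow the Dockerfile above):

```dockerfile
FROM clux/muslrust:stable AS chef
RUN cargo install cargo-chef
WORKDIR /volume

FROM chef AS planner
COPY . .
# produce a dependency-only "recipe" from Cargo.toml/Cargo.lock
RUN cargo chef prepare --recipe-path recipe.json

FROM chef AS builder
COPY --from=planner /volume/recipe.json recipe.json
# build only dependencies; this layer stays cached until the recipe changes
RUN cargo chef cook --release --target x86_64-unknown-linux-musl --recipe-path recipe.json
COPY . .
RUN cargo build --release --bin version
```

Because each `RUN` is an ordinary image layer, generic CI layer caching can restore the dependency build without any buildkit cache mounts.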
Hmm, ok. I guess I did something wrong in my setup when testing. The question then becomes whether it is enough to just cache /var/lib/docker/buildkit, or whether something fancier needs to happen using buildx, for example :/ I guess it needs some investigation.
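One avenue that might be simpler than caching docker's internal paths (an untested sketch; the image name and cache directory are placeholders): buildx can export the layer cache to a plain directory via the `local` cache backend, which generic CI cache steps know how to save and restore:

```shell
#!/usr/bin/env bash
# Sketch: persist the buildx layer cache in a directory that a CI cache step
# can save/restore between runs. Note: this caches image layers, not the
# RUN --mount=type=cache directories.
CACHE_DIR=/tmp/.buildx-cache
BUILD_CMD="docker buildx build \
  --cache-from=type=local,src=$CACHE_DIR \
  --cache-to=type=local,dest=$CACHE_DIR,mode=max \
  -t myimage:latest ."
# dry-run: print the invocation; drop the echo to actually build
echo "$BUILD_CMD"
```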
I ended up doing this, based on an article I found: https://medium.com/titansoft-engineering/docker-build-cache-sharing-on-multi-hosts-with-buildkit-and-buildx-eb8f7005918e

It seems to work, but I still get the feeling that it sometimes does a complete rebuild, and I haven't been able to pinpoint the exact issue or what triggers it. I will continue working on it when I have some spare time, but it seems promising :)
```bash
#!/usr/bin/env bash
ECR_REPO=<.....>
IMAGE_TAG=$ECR_REPO:latest
# AWS ECR does not support the buildx cache format, use a private docker hub repo instead
CACHE_TAG=<docker hub repo>:buildx-cache

docker buildx build \
  -t $IMAGE_TAG \
  -f ./Dockerfile \
  --cache-from=type=registry,ref=$CACHE_TAG \
  --cache-to=type=registry,ref=$CACHE_TAG,mode=max \
  --push \
  .
```
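On GitHub Actions specifically, newer buildx versions also ship a dedicated `gha` cache backend that talks to the Actions cache service directly. A hedged sketch (it assumes a buildx builder is already configured, e.g. via docker/setup-buildx-action, and like the registry variant it caches layers, not RUN cache mounts; the image name is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch: swap the registry cache above for the GitHub Actions cache backend.
CACHE_FLAGS="--cache-from=type=gha --cache-to=type=gha,mode=max"
# dry-run: print the invocation; drop the echo to actually build and push
echo docker buildx build $CACHE_FLAGS -t myimage:latest --push .
```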
Thank you! If you find a setup that works, that would be super helpful.
btw, I think the complete rebuild happens when the GitHub cache reaches 10 GB and it starts cycling away layers.
Ah no, I have only tested it locally, and the cache is stored on Docker Hub in this case. I will look at testing it on GitHub when I have a free night to play with it.
Since it's annoying to keep having to not recommend multi-stage builds just because the build cache cannot be reused effectively (unless you use cargo-chef across something like 4 layers), we should try to get buildkit to cache directly.
CircleCI now has orbs and env vars.