bozaro opened this issue 3 years ago
I did more investigation: it looks like the cache is corrupted before the upload stage. I compared the bad and good cache data (https://drive.google.com/drive/folders/1hzMWF_qBANvFmf3BeuQKe7KToGUOJp72?usp=sharing) and found a small difference.
`cache_good` contains two layers for the `RUN` step:

- sha256:48ffe3fe97d4a7f3ad3e09f613161e6f1a4f6b836751f0f0c93c8fd5ea92064a (linux/arm64)
- sha256:52af553f4ee5a60ea433453c95375e457f139988034d49244afcc64b08e3331e (linux/amd64)

`cache_bad` contains only one layer for the `RUN` step:

- sha256:52af553f4ee5a60ea433453c95375e457f139988034d49244afcc64b08e3331e (linux/amd64)

```mermaid
graph TD
  good --> g_38cc3[38cc3: FROM arm64]
  good --> g_b71f9[b71f9: FROM amd64]
  g_38cc3 --> g_48ffe[48ffe: RUN]
  g_b71f9 --> g_52af5[52af5: RUN]
  g_b71f9 --> g_7c10d[7c10d: COPY]
  g_52af5 --> g_7c10d
  g_38cc3 --> g_7c10d
  g_48ffe --> g_7c10d
  bad --> b_38cc3[38cc3: FROM arm64]
  bad --> b_b71f9[b71f9: FROM amd64]
  b_b71f9 --> b_2b352[2b352: COPY]
  b_38cc3 --> b_2b352
  b_b71f9 --> b_52af5[52af5: RUN]
  b_52af5 --> b_ca41e[ca41e: COPY]
```
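The manifest comparison can also be scripted without docker. A minimal sketch, assuming the two cache manifests were downloaded from the Drive folder above; the `cache_good.json` / `cache_bad.json` filenames and the inlined JSON shape are illustrative stand-ins, not the real cache manifest format:

```shell
#!/bin/sh
# Stand-in JSON for the two downloaded cache manifests (hypothetical
# filenames); the digests are the good/bad RUN-step layers from above.
cat > cache_good.json <<'EOF'
{"layers":[
 {"digest":"sha256:48ffe3fe97d4a7f3ad3e09f613161e6f1a4f6b836751f0f0c93c8fd5ea92064a"},
 {"digest":"sha256:52af553f4ee5a60ea433453c95375e457f139988034d49244afcc64b08e3331e"}]}
EOF
cat > cache_bad.json <<'EOF'
{"layers":[
 {"digest":"sha256:52af553f4ee5a60ea433453c95375e457f139988034d49244afcc64b08e3331e"}]}
EOF

# Pull out the referenced digests and diff the two sets.
grep -o 'sha256:[0-9a-f]*' cache_good.json | sort > good.txt
grep -o 'sha256:[0-9a-f]*' cache_bad.json | sort > bad.txt
comm -23 good.txt bad.txt   # digests present only in the good cache
```

With these stand-in inputs the last line prints the `48ffe…` arm64 digest, i.e. the layer that disappeared from the bad cache.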
I modified the script to check clean cache generation (without `--cache-from`):
```shell
#!/bin/bash
REGISTRY=registry.example.net/test-docker/example

# Recreate builder to clear the local cache
docker buildx rm cachebug || true
docker buildx create --name cachebug --driver docker-container
docker buildx inspect cachebug --bootstrap

# Create some changed file
date > changed.txt

# Run with cache-to only
docker buildx build \
  --builder cachebug \
  --cache-to type=registry,ref=$REGISTRY:buildcache,mode=max \
  --platform linux/amd64 \
  --platform linux/arm64 \
  .

# Recreate builder to clear the local cache
docker buildx rm cachebug || true
docker buildx create --name cachebug --driver docker-container
docker buildx inspect cachebug --bootstrap

# Create some changed file
date > changed.txt

# Run with cache-from only
docker buildx build \
  --builder cachebug \
  --cache-from type=registry,ref=$REGISTRY:buildcache \
  --platform linux/amd64 \
  --platform linux/arm64 \
  .
```
And this issue still reproduces randomly.
The problem seems to occur when the `RUN` step produces the same blob for different platforms (it starts, and `/proc` is created, within the same second) and the `COPY` step produces different blobs (it starts, and `/opt` is created, in different seconds).
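This second-granularity effect is easy to demonstrate outside docker, since an image layer is essentially a tar archive whose headers include file mtimes. A minimal sketch with plain files standing in for the per-platform layer contents (no buildkit involved):

```shell
#!/bin/sh
# Two "layers" with identical file content.
mkdir -p a b
echo "Hello, world!!!" > a/foo.txt
echo "Hello, world!!!" > b/foo.txt

# Same mtime -> byte-identical archives -> one shared blob digest.
touch -d "2021-01-01 00:00:00" a/foo.txt b/foo.txt
tar -C a -cf - foo.txt | sha256sum
tar -C b -cf - foo.txt | sha256sum

# Bump one mtime by a single second -> the digests diverge.
touch -d "2021-01-01 00:00:01" b/foo.txt
tar -C b -cf - foo.txt | sha256sum
```

This is why the amd64 and arm64 `RUN` layers collapse into a single blob only when both platforms happen to write their files within the same second.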
So I modified the scripts to make the problem reproduce much more often.
Dockerfile:
```dockerfile
FROM busybox AS stage-0

FROM stage-0 AS stage-1
RUN echo "Hello, world!!!"; if [ "$(arch)" = "x86_64" ]; then sleep 2; fi
COPY changed.txt /opt/changed.txt

FROM busybox
COPY --from=stage-1 /opt/changed.txt /opt/copy.txt
```
Test script:
```shell
#!/bin/bash
REGISTRY=registry.example.net/test-docker/example

# Recreate builder to clear the local cache
docker buildx rm cachebug || true
docker buildx create --name cachebug --driver docker-container
docker buildx inspect cachebug --bootstrap

# Create some changed file
date > changed.txt

# Build stage-0 to pull the busybox image
docker buildx build \
  --builder cachebug \
  --platform linux/amd64 \
  --platform linux/arm64 \
  --target stage-0 \
  .

# Run with cache-to only
docker buildx build \
  --builder cachebug \
  --cache-to type=registry,ref=$REGISTRY:buildcache,mode=max \
  --platform linux/amd64 \
  --platform linux/arm64 \
  .

# Recreate builder to clear the local cache
docker buildx rm cachebug || true
docker buildx create --name cachebug --driver docker-container
docker buildx inspect cachebug --bootstrap

# Create some changed file
date > changed.txt

# Run with cache-from only
docker buildx build \
  --builder cachebug \
  --cache-from type=registry,ref=$REGISTRY:buildcache \
  --platform linux/amd64 \
  --platform linux/arm64 \
  .
```
I tried saving the `stage-1`, `latest` and `buildcache` images and recovering the lost cache path (https://drive.google.com/drive/u/0/folders/1pgE0BIQo6PlldnWSDUBtP99xcSvtQAlN).
```mermaid
graph TD
  style copy_stage_amd64 stroke-dasharray: 5 5
  root --> amd64[b71f9: FROM amd64]
  root --> arm64[38cc3: FROM arm64]
  amd64 --> copy_latest_amd64[62dd4: COPY latest, amd64]
  amd64 -.-> run[1bb1a: RUN]
  arm64 --> run
  arm64 --> copy_latest_arm64[f6510: COPY latest, arm64]
  run -.-> copy_stage_amd64[01cda: COPY stage-1, amd64]
  run --> copy_stage_arm64[f253d: COPY stage-1, arm64]
```
In this case I got a single layer for the `RUN` command and different layers for the `COPY` command.
The dashed links and layers were lost in the cache.
I independently encountered some bugs around the parallel creation of blobs for different platforms and have a fix out for PR here: https://github.com/moby/buildkit/pull/2273
Have not tried the reproducer here to see if it fixes the problem, but may be worth a try if anyone has time.
I refactored the Dockerfile to produce a stable layer at the `RUN` step:
```dockerfile
FROM alpine AS base
# The RUN command always produces a layer with digest `sha256:bcca0bb7...`
RUN echo "Hello, world!!!"; if [ "$(arch)" = "x86_64" ]; then sleep 2; fi; touch -d 2021-01-01 -t 00:00:00 /foo.txt
# The COPY command produces a different layer per platform because /opt has a different change time
COPY changed.txt /opt/changed.txt

FROM alpine
COPY --from=base /opt/changed.txt /opt/copy.txt
```
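The `touch -d` trick generalizes: pinning every mtime makes a layer reproducible across platforms. A sketch with plain directory trees standing in for the per-platform layer contents (no docker needed; the paths are illustrative):

```shell
#!/bin/sh
# Per-platform "layer" trees with identical content written a second apart.
mkdir -p amd64/opt arm64/opt
echo data > amd64/opt/changed.txt
sleep 1      # emulate the two platforms racing past a second boundary
echo data > arm64/opt/changed.txt

# Different mtimes -> different archive digests.
tar -C amd64 -cf - opt | sha256sum
tar -C arm64 -cf - opt | sha256sum

# Pin every mtime (files and directories) -> byte-identical archives.
find amd64 arm64 -exec touch -d "2021-01-01 00:00:00" {} +
tar -C amd64 -cf - opt | sha256sum
tar -C arm64 -cf - opt | sha256sum
```

Note that the directory mtimes matter too, which is why the `find` touches the whole tree rather than just the files.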
And I removed the `stage-0` build from the script.
But `docker buildx build` still randomly produces a cache with an incomplete layer graph (I got 53 failures in 100 runs).
> I independently encountered some bugs around the parallel creation of blobs for different platforms and have a fix out for PR here: #2273
> Have not tried the reproducer here to see if it fixes the problem, but may be worth a try if anyone has time.
I have built #2273 locally and it looks like it does not fix this issue.
> I have built #2273 locally and it looks like it does not fix this issue.
Ah okay, good to know, thanks for giving it a try
I created a script to reproduce the issue in one click: https://github.com/bozaro/buildkit-2279
I rechecked the state of this issue with the https://github.com/bozaro/buildkit-2279 script and it is no longer flaky. It fails consistently:
```text
e134e8a1d9522d24917077a359c77e7428586adb7be11d16d4c886902c2ecb3b
Wait for service on endpoint: http://localhost:5001
bd194edf668ff7df00eedecb5f9b346af23405cc4e7bf0ce12f91b6aefc59da9
Wait for service on endpoint: http://localhost:5002
Check is build for multiple platform supported...
Run tests...
[1] FAIL
...
[100] FAIL
Cleanup docker images...
cachebug_registry
cachebug_proxy
Show statistics...
FAILED 100 OF 100
```
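The statistics loop in `test.sh` boils down to something like the following; `run_once` here is a hypothetical stand-in for one `--cache-to` build plus one `--cache-from` rebuild, not the real script:

```shell
#!/bin/sh
# Repeat a flaky round trip N times and report a failure count,
# in the spirit of the test.sh output above.
run_once() {
  # hypothetical stand-in: build with --cache-to, rebuild with
  # --cache-from, and fail unless every layer was a cache hit
  false
}

runs=100
failed=0
i=1
while [ "$i" -le "$runs" ]; do
  run_once || { failed=$((failed + 1)); echo "[$i] FAIL"; }
  i=$((i + 1))
done
echo "FAILED $failed OF $runs"
```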
Given the resolution of #1981, could you try upgrading your buildx plugin to version 0.8.2?
Has this issue progressed any further, or is there still no known fix? How are people working around this?
My fix was simply to not use multi-stage builds, but that is of course not a useful solution for everybody.
@n1ngu I tried upgrading and the issue persists, multi-stage is broken
Ok @ghost-of-montevideo, just check that you are not actually under the influence of the confusingly similar #2274.
I was under the impression that the present issue is fixed and only affects the outdated buildkit vendored out-of-the-box within docker, as I was unable to reproduce it if I was using an updated buildkit plugin and the docker-container build driver. Unlike the present issue, the reproduction of #2274 is consistent for me.
@n1ngu The reproduction in https://github.com/bozaro/buildkit-2279 shows that with buildx version 0.9.1 and moby/buildkit:0.10.4 the failure rate is 100% on this issue. Can you show how you were able to validate that this is working, as stated in your last comment?
> @n1ngu The reproduction in https://github.com/bozaro/buildkit-2279 shows that with buildx version 0.9.1 and moby/buildkit:0.10.4 the failure rate is 100% on this issue. Can you show how you were able to validate that this is working, as stated in your last comment?
@benlangfeld I can't access the building environment I used at the time, but I now think the culprit of the issue would be multi-stage builds, as @ghost-of-montevideo pointed out. All my builds were single-stage so it worked right. I just feel this isn't highlighted enough in the issue description.
> I just feel this isn't highlighted enough in the issue description.
The initial issue message has a "Steps to reproduce" section. What additional highlighting do you expect?
> The initial issue message has a "Steps to reproduce" section. What additional highlighting do you expect?
Given the context of #1981 and #2274, I'd expect the "multi-stage build triggers the problem, single-stage build works fine" bit of information (not literally) to be both in the title and the description, not only in the reproduction source code.
Is there a solution to this yet? I am also using multistage builds and encountering this issue.
Is there anybody working on this issue, and is there any way others can help? Either this or the related https://github.com/moby/buildkit/issues/2274
I'm also happy to help a bit; if there is a way we can contribute, let us know.
> if there is a way we can contribute, let us know
You can debug the code, find the root of the issue, fix it, and submit a PR.
I spent a while on this issue and could not find a great resolution using buildkit, and there are lots of unsolved issues about this across the web. After seeing https://github.com/moby/buildkit/issues/1981#issuecomment-1516704608 and another on SO, I switched our GitLab Cloud CI/CD pipelines to use buildah instead of buildkit, and caching is working well now.
It's (almost) a drop-in replacement, and using overlay2 as the storage driver, the build performance (caching disabled) seems to be the same as with buildkit.
Here's a sample of what that could look like in a .gitlab-ci.yml file:
```yaml
.build_template: &build_template
  image: quay.io/buildah/stable
  before_script:
    - buildah login [Login Params] [Registry URL]
  script:
    - buildah build
      --cache-from registry.example.com/myproject/myapp
      --tag registry.example.com/myproject/myapp:${CI_COMMIT_SHA}
      --cache-to registry.example.com/myproject/myapp
      --layers
      --storage-driver overlay2
      -f $DOCKERFILE_PATH $BUILD_CONTEXT_PATH
    - buildah push registry.example.com/myproject/myapp:${CI_COMMIT_SHA}
```
> I rechecked the state of this issue with the https://github.com/bozaro/buildkit-2279 script and it is no longer flaky. It fails consistently:
>
> ```text
> e134e8a1d9522d24917077a359c77e7428586adb7be11d16d4c886902c2ecb3b
> Wait for service on endpoint: http://localhost:5001
> bd194edf668ff7df00eedecb5f9b346af23405cc4e7bf0ce12f91b6aefc59da9
> Wait for service on endpoint: http://localhost:5002
> Check is build for multiple platform supported...
> Run tests...
> [1] FAIL
> ...
> [100] FAIL
> Cleanup docker images...
> cachebug_registry
> cachebug_proxy
> Show statistics...
> FAILED 100 OF 100
> ```
I tried your script and I only got 100 of 100 failures when I exhausted the number of requests to Docker Hub. Were you logged in when you tried?
Hi guys, any update on this?
Experiencing the same.
Tried the S3 and registry cache backends.
Two builds run in subsequent CodeBuild projects, which use exactly the same environment deployed via IaC. The only difference is in some build flags, e.g. the stage target, and some ARGs.
The first build re-uses all of the cache and finishes very fast.
The second build re-uses the layer cache for the first few layers, and then ignores the cache for the last command.
I inspected the layers thoroughly using dive, and the layer preceding the one that is not getting cached has exactly the same digest across runs.
buildx: 0.17.1
Docker version 26.1.4, build 5650f9b
Hey @moltar, your experience sounds much like mine. I've been using docker bake with multistage Dockerfiles, and it's been frustrating that cache restoration is quite flaky, even when simply using either the local file or the remote S3 cache backend. Even when all the build inputs and contexts are deterministically held constant, caching seems to be hit or miss, and often only partial.
While sifting through related tickets, I believe the main culprit is the issue discussed here:
Pulled https://github.com/bozaro/buildkit-2279 and ran `test.sh` (thanks @bozaro):

```text
...
Cleanup docker images...
cachebug_registry
cachebug_proxy
Show statistics...
FAILED 54 OF 100
```
Hmm, I can't seem to replicate that failure rate on my workstation (although I commented out the arm64 bits). For the more complex multistage builds I use internally, I certainly can. I'll have to make a similar reproducer.
Version information:

Steps to reproduce:

1. Create `Dockerfile`.
2. Run a script like the one above (`REGISTRY` should be replaced by the actual value).

What I see: when I run the above script multiple times, the step `RUN echo "Hello, world!!!"` misses the cache roughly every second run for one of the platforms (I have not seen the cache problem on all platforms at the same time). For example:

Update (2021-08-18)

Repository to reproduce the issue: https://github.com/bozaro/buildkit-2279 (simply checkout and run `./test.sh`).