audgster opened this issue 1 year ago
This looks to be the issue reported and fixed upstream in buildkit in https://github.com/moby/buildkit/pull/3858.
The patch is available in moby/buildkit:master, and also in the latest release candidate moby/buildkit:v0.12.0-rc1.
You can try the latest build by creating a new docker-container builder and configuring the image option to one of the above images.
Alternatively, you can just wait for the release of v0.12, which will be vendored into docker at some point soon!
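The builder setup described above can be sketched as follows; the builder name `bk-test` is arbitrary, and this assumes a running Docker daemon:

```shell
# Create a fresh docker-container builder that runs a specific
# buildkit image instead of the bundled default.
docker buildx create \
  --name bk-test \
  --driver docker-container \
  --driver-opt image=moby/buildkit:v0.12.0-rc1 \
  --use --bootstrap

# Confirm which buildkit version the builder is actually running.
docker buildx inspect bk-test
```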
I'm still having these problems, even when trying moby/buildkit:master or moby/buildkit:v0.12.0-rc1 with the latest docker-buildx plugin version: github.com/docker/buildx v0.11.2
You should be able to use https://github.com/moby/buildkit/releases/tag/v0.12.1 now.
Was the cache you're trying to use built using the new buildkit version? Does your Dockerfile have a `# syntax` comment? You may need to update that to the latest release of docker/dockerfile as well.
Thanks for the tip. I'm using this syntax comment:
# syntax=docker/dockerfile-upstream:master
I've tried to use the new release, but it's throwing me this error:
ERROR [internal] booting buildkit
Using moby/buildkit:latest works without this error. By the way, I'm actually using this docker-buildx plugin version:
github.com/docker/buildx v0.11.2
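For reference, the `docker/dockerfile-upstream:master` frontend above tracks unreleased changes; a header pinned to a released syntax image instead would look like this (the `FROM` line is just a placeholder):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10-slim
```

Pinning to `docker/dockerfile:1` tracks the latest stable release of the frontend rather than master builds.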
This is my script to setup fresh buildx-instances:
#!/bin/bash
network='host'
buildkitVersion='0.12.1'
image="moby/buildkit:${buildkitVersion:-latest}"
echo "Pruning existing builder instances ...";
docker buildx ls | \
grep -Po '^[^ ]+' | \
tail -n+2 | \
grep -v 'default' | \
xargs -l1 -I{} docker builder rm --force {} \
;
echo
echo "Pruning caches ...";
docker buildx prune --all --force;
instance=$(
docker buildx create \
--use --bootstrap \
--driver docker-container \
--driver-opt network="${network:-host}" \
--driver-opt image="${image}" \
--buildkitd-flags '--allow-insecure-entitlement security.insecure' \
--config /etc/buildkit/buildkitd.toml
);
exit $?
Not sure if it is the same issue, but I observe very similar behavior when building on macOS with Apple silicon and subsequently building the same Dockerfile within a container on the same machine. After pruning the caches, I test with the following Dockerfile:
FROM python:3.10.15-slim-bullseye
RUN mkdir /work
COPY ./requirements.txt /work
RUN pip install -r /work/requirements.txt
and run `docker build --tag outside .` (nothing is cached here yet).
Then in the container with:
docker run -it -v ./:/data -v /var/run/docker.sock:/var/run/docker.sock --rm docker:27-cli docker build --tag inside /data
Everything up to the `COPY` command is cached. The `COPY` (and, as a result, the second `RUN`) is not cached within the container build, although, needless to say, none of the files have changed. If I prune the caches and change the Dockerfile e.g. to
FROM python:3.10.15-slim-bullseye
RUN mkdir /work
# COPY ./requirements.txt /work
#RUN pip install -r /work/requirements.txt
RUN pip install PyYaml
then everything is cached during the build inside the container as expected.
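One way to rule out actual file differences between the host and the bind mount is to compare checksums of the `COPY` source. A minimal sketch follows; the file content is made up, and in the real repro the second checksum would be taken inside the docker:27-cli container against /data/requirements.txt (simulated here with a plain copy so the snippet runs without Docker):

```shell
# Create the file that the COPY step ingests (hypothetical content).
printf 'PyYAML==6.0.1\n' > requirements.txt

# Checksum as seen on the host.
host_sum=$(sha256sum requirements.txt | cut -d' ' -f1)

# Checksum as seen through the mount path; simulated here with a copy.
cp requirements.txt /tmp/requirements.txt
mount_sum=$(sha256sum /tmp/requirements.txt | cut -d' ' -f1)

# If these differ, the cache miss is expected; if they match, it is not.
[ "$host_sum" = "$mount_sum" ] && echo "checksums match"
```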
`docker info` on the host (lines with paths redacted):
Client:
Version: 27.2.0
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.16.2-desktop.1
compose: Docker Compose (Docker Inc.)
Version: v2.29.2-desktop.2
WARNING: daemon is not using the default seccomp profile
debug: Get a shell into any image or container (Docker Inc.)
Version: 0.0.34
desktop: Docker Desktop commands (Alpha) (Docker Inc.)
Version: v0.0.15
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.2
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.25
feedback: Provide feedback, right in your terminal! (Docker Inc.)
Version: v1.0.5
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v1.3.0
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
scout: Docker Scout (Docker Inc.)
Version: v1.13.0
Server:
Containers: 3
Running: 2
Paused: 0
Stopped: 1
Images: 13
Server Version: 27.2.0
Storage Driver: overlayfs
driver-type: io.containerd.snapshotter.v1
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 6.10.4-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 16
Total Memory: 7.654GiB
Name: docker-desktop
ID: 9f56eec7-25e9-4997-b971-708e47e469e4
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Labels:
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false
and in the container:
Client:
Version: 27.3.1
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.17.1
Path: /usr/local/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.29.7
Path: /usr/local/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 13
Server Version: 27.2.0
Storage Driver: overlayfs
driver-type: io.containerd.snapshotter.v1
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version: 8fc6bcff51318944179630522a095cc9dbf9f353
runc version: v1.1.13-0-g58aa920
init version: de40ad0
Security Options:
seccomp
Profile: unconfined
cgroupns
Kernel Version: 6.10.4-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: aarch64
CPUs: 16
Total Memory: 7.654GiB
Name: docker-desktop
ID: 9f56eec7-25e9-4997-b971-708e47e469e4
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Labels:
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false
Description
COPY directives don't use inline caching when using an image cache built on a different architecture. If a cache is produced by docker engine on a linux/amd64 machine and used as the cache source by docker engine running on a linux/arm64 machine, COPY directives are not recognized as being cached and subsequent layers are re-built, even if both builds are targeting the same platform.
Expected behaviour
On Apple Silicon run:
docker build --cache-to=type=inline -t <your docker repo here>/build-cache-image:arm64 --platform=linux/x86_64 .
docker push
On Intel silicon run:
docker build --cache-to=type=inline -t <your docker repo here>/build-cache-image:x86 --platform=linux/x86_64 .
docker push
On both machines, clear the build cache with docker builder prune and verify all caches are cleared with docker system df.
Then run:
docker build --cache-from=<your docker repo here>/build-cache-image:arm64 -t final-image --platform=linux/x86_64 .
Since nothing has changed in your files, all layers should be cached. Likewise run:
docker build --cache-from=<your docker repo here>/build-cache-image:x86 -t final-image --platform=linux/x86_64 .
Similarly, since there are no changes to your files, all layers should be cached.
Actual behaviour
When running either:
docker build --cache-from=<your docker repo here>/build-cache-image:x86 -t final-image --platform=linux/x86_64 .
on Apple silicon, or:
docker build --cache-from=<your docker repo here>/build-cache-image:arm64 -t final-image --platform=linux/x86_64 .
on an Intel chip, RUN directives are properly cached, but COPY directives bust the cache and result in a rebuild of subsequent layers.
Apple Silicon example:
docker build --cache-from=<your docker repo here>/build-cache-image:x86 -t final-image --platform=linux/x86_64 .
Intel CPU example:
docker build --cache-from=<your docker repo here>/build-cache-image:arm64 -t final-image --platform=linux/x86_64 .
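The repro steps above can be collected into one script per machine. This is a sketch, not the reporter's actual script: REPO is a placeholder for a pushable registry repo, and TAG/OTHER are swapped between the two machines:

```shell
#!/bin/sh
# Placeholders: set REPO to your registry repo; TAG is this machine's
# native arch, OTHER is the arch of the machine that pushed the cache.
REPO='<your docker repo here>/build-cache-image'
TAG='arm64'    # 'x86' on the Intel machine
OTHER='x86'    # 'arm64' on the Intel machine

# 1. Build with inline cache metadata and push it.
docker build --cache-to=type=inline -t "${REPO}:${TAG}" --platform=linux/x86_64 .
docker push "${REPO}:${TAG}"

# 2. Clear local build caches and confirm they are gone.
docker builder prune --force
docker system df

# 3. Rebuild against the cache pushed from the other machine;
#    every layer should be CACHED if inline caching works cross-arch.
docker build --cache-from="${REPO}:${OTHER}" -t final-image --platform=linux/x86_64 .
```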
Buildx version
Intel: github.com/docker/buildx v0.10.5 86bdced, Apple Silicon: github.com/docker/buildx v0.10.5 86bdced7766639d56baa4c7c449a4f6468490f87
Docker info
Intel:
Builders list
Intel:
Configuration
Dockerfile
Pipfile
Pipfile.lock
Build logs
No response
Additional info
I used --chmod and --chown to explicitly set permissions and ownership for the files in the image, and I verified on both machines that the checksums of the files were the same.
Apple Silicon:
Intel:
Code files are available here: https://gitlab.com/audrey.simonne/test-cross-platform-cache
Cache images are available here: https://hub.docker.com/repository/docker/asimval/build-cache-image/general