There should theoretically be a way to do it even for partial matches. A problem, though, is that there is no guarantee that the image is still in Docker when the build finishes. If it is deleted before the build completes, you could get an error (on tag, or on uploading a layer for partial matches). So this maybe needs an opt-in flag (at least until there is a special incremental load endpoint in the Docker API).
Yes I agree, an opt-in flag would be best. Thanks.
I would like to add a data point and a reproducible example for this problem.
Dockerfile
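(The actual Dockerfile is collapsed in the issue; as a purely hypothetical stand-in, assuming the build simply pulls the matching tensorflow/tensorflow tag so that the layers are large, it could be generated like this:)
# Hypothetical stand-in for the collapsed Dockerfile; the tensorflow/tensorflow
# base image is an assumption based on the TAG values used below.
cat > Dockerfile <<'EOF'
ARG TAG
FROM tensorflow/tensorflow:${TAG}
EOF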
Creating a builder with the docker-container driver:
docker buildx create --name buildx-default --driver docker-container --bootstrap
Building a bunch of big images with buildx-default:
time bash -c 'for TAG in 2.8.0-gpu 2.7.1-gpu 2.7.0-gpu 2.6.0-gpu 2.4.3-gpu ; do docker buildx build --builder buildx-default --tag tf-test:${TAG} --build-arg TAG=${TAG} --load . ; done'
...
bash -c 132.93s user 29.20s system 190% cpu 1:25.31 total
Same but with default builder:
time bash -c 'for TAG in 2.8.0-gpu 2.7.1-gpu 2.7.0-gpu 2.6.0-gpu 2.4.3-gpu ; do docker buildx build --tag tf-test:${TAG} --build-arg TAG=${TAG} --load . ; done'
...
bash -c 0.34s user 0.20s system 7% cpu 7.535 total
That is a rather dramatic slow-down, especially when building many similar images. As far as I understand, all this time is spent on serializing and deserializing data. Even if it's only via a hacky, half-baked solution like an opt-in flag, it would certainly be nice to have a way to optimize this.
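One way to confirm where the time goes (a sketch reusing the buildx-default builder and tf-test tags from above; the type=docker output and docker load are standard, the tarball path is just an example) is to export the image to a tarball and load it as a separate, timed step:
# Export to a Docker-format tarball instead of using --load ...
docker buildx build --builder buildx-default --tag tf-test:2.8.0-gpu \
  --build-arg TAG=2.8.0-gpu --output type=docker,dest=/tmp/tf-test-2.8.0-gpu.tar .
# ... then time the transfer into the local Docker engine on its own.
time docker load -i /tmp/tf-test-2.8.0-gpu.tar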
Our use case also suffers from the time it takes to export to OCI image format/sending tarball.
We end up sticking to DOCKER_BUILDKIT:
DOCKER_BUILDKIT=1 docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
-t test .
[+] Building 1.3s (28/28) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 35B 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 0.8s
[...]
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:d40374998f58b1491b57be3336fdc2793 0.0s
=> => naming to docker.io/library/test 0.0s
=> exporting cache 0.0s
=> => preparing build cache for export 0.0s
Instead of using buildx:
docker buildx build \
--cache-to type=inline \
--builder builder \
--load \
-t test .
[+] Building 33.5s (30/30) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 1.98kB 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 6.43kB 0.0s
=> resolve image config for docker.io/docker/dockerfile:1 1.6s
[...]
=> preparing layers for inline cache 0.1s
=> exporting to oci image format 26.9s
=> => exporting layers 0.0s
=> => exporting manifest sha256:9b04583c6b0681e05222c3b61e59 0.0s
=> => exporting config sha256:e58cf81d847d829a7a94d6cfa57b29 0.0s
=> => sending tarball 26.9s
=> importing to docker 0.4s
[...]
In my docker-compose project I'm getting "sending tarball" times of almost 2 minutes, when the entire build is cached. Makes the development experience so painful that I'm considering setting up the services outside of Docker to avoid this.
=> [docker-worker internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 190B 0.0s
=> [docker-worker internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [docker-worker internal] load metadata for docker.io/library/python:3.8 0.9s
=> [docker-backend internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 6.25kB 0.0s
=> [docker-backend internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [docker-backend internal] load metadata for docker.io/library/ubuntu:20.04 0.8s
=> [docker-backend internal] load metadata for docker.io/library/node:14.19-bullseye-slim 0.8s
=> [docker-worker 1/5] FROM docker.io/library/python:3.8@sha256:blah 0.0s
=> => resolve docker.io/library/python:3.8@sha256:blah 0.0s
=> [docker-worker internal] load build context 0.1s
=> => transferring context: 43.51kB 0.0s
=> [docker-backend gatsby 1/11] FROM docker.io/library/node:14.19-bullseye-slim@sha256:blah 0.0s
=> => resolve docker.io/library/node:14.19-bullseye-slim@sha256:blah 0.0s
=> [docker-backend internal] load build context 0.1s
=> => transferring context: 72.58kB 0.1s
=> [docker-backend with-secrets 1/6] FROM docker.io/library/ubuntu:20.04@sha256:blah 0.0s
=> => resolve docker.io/library/ubuntu:20.04@sha256:blah 0.0s
=> CACHED [docker-worker 2/5] 0.0s
=> CACHED [docker-worker 3/5] 0.0s
=> CACHED [docker-worker 4/5] 0.0s
=> CACHED [docker-worker 5/5] 0.0s
=> [docker-worker] exporting to oci image format 117.7s
=> => exporting layers 0.3s
=> => exporting manifest sha256:blah 0.0s
=> => exporting config sha256:blah 0.0s
=> => sending tarball 117.6s
Same here. It's incredibly painful building even not-so-large projects that should take seconds.
In case it's relevant to anyone: if you're using Docker for Mac, there's this issue about slow performance saving/loading tarballs that might be affecting you (AFAIK buildx --load just uses docker load under the hood): https://github.com/docker/for-mac/issues/6346#issuecomment-1304779119
As you can see, there's hopefully a fix for it in the next release. In the meantime, a workaround is to disable the virtualization.framework experimental feature.
"Sending tarball" means you are running the build inside a container(or k8s or remote instance). While these are powerful modes (eg. for multi-platform) if you want to run the image you just built with local Docker, it needs to be transferred to Docker first. If your workflow is to build and then run in Docker all the time, then you should build with a Docker driver on buildx, because that driver does not have the "sending tarball" phase to make the result available as local Docker image.
You can read more about the drivers at https://github.com/docker/buildx/blob/master/docs/manuals/drivers/index.md
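For reference, a minimal way to check which driver you are building with and to switch back to a Docker-driver builder (default is the stock Docker-driver builder on a plain Docker engine; builder names can differ, e.g. on Docker Desktop):
# List builders and their drivers; the DRIVER column shows docker vs docker-container.
docker buildx ls
# Make the stock Docker-driver builder current again (or pass --builder default per build).
docker buildx use default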
Latest proposal for speeding up the loading phase for other drivers https://github.com/moby/moby/issues/44369
@tonistiigi a-ha! That was it. At some point I used docker buildx create --use and that overrode the default builder, which uses the Docker driver. Doing docker buildx ls to find the builder with the Docker driver and then docker buildx use --default builder_name fixed it for me! No more "sending tarball" step.
Hello. We use docker buildx build --push --output=type=image,push-by-digest=true, and it seems to have the same issue as mentioned here:
Thu, 03 Aug 2023 17:42:11 GMT #15 exporting to image
Thu, 03 Aug 2023 17:42:11 GMT #15 exporting layers
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting layers 281.9s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest sha256:a46bfbdf8f2e24cbc0812f178cdf81704222f9924a52c9c90feeb971afc5f2ca 0.0s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting config sha256:143c2a936b7a2cd51e108d683d6d6c7d4f7160e48543ca2977637cbb0829b848 done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting attestation manifest sha256:d336dfa04618341c715c5a10ac07eeda416b633cf15b83a39c28fba0d0662a43 0.0s done
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest list sha256:209725be101f2fe247081474b1057355dfbc1010de2581643d0a6c43e8dfda75
Thu, 03 Aug 2023 17:46:53 GMT #15 exporting manifest list sha256:209725be101f2fe247081474b1057355dfbc1010de2581643d0a6c43e8dfda75 0.0s done
But as far as I can see, https://github.com/docker/buildx/pull/1813 should address this for --output=docker. Does that mean the same could be done to increase the speed in our case too?
@tonistiigi But that would mean the user forgoes the advantages of the other build drivers. The issue is with the performance of sending tarballs.
As mentioned in https://github.com/docker/buildx/issues/626, https://github.com/moby/moby/issues/44369 is the docker engine-side requirement for this feature.
Something's off here...
/update: a restart of Docker Desktop solved the problem
Hello, I have encountered the same issue, but only when building images from Windows. Using the same command inside WSL works fine.
My setup is the following:
Both Windows and WSL Docker CLIs are using the same endpoint to connect to the singleton Podman server instance (in fact, I can see the same image and container set on both sides).
When I launch the following command inside WSL, it works fine:
docker build --platform linux/amd64 --load -t my-image:latest .
However, launching the same command from Windows PowerShell, it gets stuck indefinitely on the "sending tarball" step.
/update: it seems to be an issue related to PowerShell only. Running the same command again inside the old Windows CMD works fine as well.
I was experiencing the issue when building from MacOS...
When I build an image that already exists (because of a previous build on the same engine with a 100% cache hit), the builder still spends a lot of time in "sending tarball". This causes a noticeable delay in the build. Perhaps this delay could be optimized away in the case of a 100% cache hit?
For example, when building a 1.84GB image with 51 layers, the entire build is 9s, of which 8s is in "sending tarball" (see output below).
It would be awesome if fully cached builds returned at near-interactive speed!