Open mixmind opened 1 year ago
If, in a parallel container that mounts the same folder (/var/lib/docker) and runs as arm64, I pull the amd64 image and then, from the original Rosetta container, try to run a container using that image, an additional issue occurs:
INFO[2023-01-13T17:20:03.341556759+02:00] No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]
INFO[2023-01-13T17:20:03.341587717+02:00] IPv6 enabled; Adding default IPv6 external servers: [nameserver 2001:4860:4860::8888 nameserver 2001:4860:4860::8844]
WARN[2023-01-13T17:20:03.345230092+02:00] seccomp is not enabled in your kernel, running container without default profile
time="2023-01-13T17:20:03.443803926+02:00" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-13T17:20:03.446418217+02:00" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-13T17:20:03.446441051+02:00" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-13T17:20:03.448273342+02:00" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335 pid=4347 runtime=io.containerd.runc.v2
INFO[2023-01-13T17:20:03.585186259+02:00] shim disconnected id=0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335
WARN[2023-01-13T17:20:03.585303176+02:00] cleaning up after shim disconnected id=0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335 namespace=moby
INFO[2023-01-13T17:20:03.585336009+02:00] cleaning up dead shim
WARN[2023-01-13T17:20:03.690890884+02:00] cleanup warnings time="2023-01-13T17:20:03+02:00" level=info msg="starting signal loop" namespace=moby pid=4366 runtime=io.containerd.runc.v2 time="2023-01-13T17:20:03+02:00" level=warning msg="failed to read init pid file" error="open /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335/init.pid: no such file or directory" runtime=io.containerd.runc.v2
ERRO[2023-01-13T17:20:03.691394384+02:00] copy shim log error="read /proc/self/fd/16: file already closed"
ERRO[2023-01-13T17:20:03.692102718+02:00] stream copy error: reading from a closed fifo
ERRO[2023-01-13T17:20:03.737929551+02:00] 0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335 cleanup: failed to delete container from containerd: no such container
ERRO[2023-01-13T17:20:03.737972301+02:00] Handler for POST /v1.41/containers/0410a84ac0f410716fa13726d19a56a286a8bfa4262e6920a0d7133909e04335/start returned error: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: waiting for init preliminary setup: read init-p: connection reset by peer: unknown
Facing the same issue on Docker Desktop 4.16.1. Can't pull images using dind.
Interestingly, I've run into the same issue attempting to use dind in UTM with Rosetta.
I'm facing the same issue and cannot pull images inside an amd64 dind image or any custom image with Docker installed.
"dockerd" accepts no argument(s). See 'dockerd --help'.
This sounds like it may be related to the Docker engine doing a "re-exec" of itself to launch a process in another namespace. The binary running under Rosetta could perhaps affect this. 🤔
I'm curious though; what's the use-case for running a non-matching platform of docker-in-docker? (Note that userland emulation, whether with QEMU or Rosetta, will always be best-effort.)
So when dockerd pulls an image it does a busybox-style re-exec to run a chrooted process to safely extract the layer.
The way this works in dockerd is that it calls exec("/proc/self/exe", "docker-untar") and passes some stuff through an extra file descriptor. What does that actually do on the system? The first argument to exec is the path to execute, and the second argument is the name of the program, which would normally be the same-ish thing as the path... but for dockerd's re-exec it uses the name to determine whether it should start up normally or execute some registered function.
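To make the mechanism concrete, here is a minimal, self-contained Go sketch of that argv[0]-based dispatch (dockerd is written in Go). This is not dockerd's actual code; the runUntar helper, the file-descriptor number, and the pipe wiring are purely illustrative.

```go
// Minimal sketch of the argv[0]-based re-exec pattern described above.
// NOT dockerd's actual code; runUntar and the pipe wiring are illustrative.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	// Child path: the process was re-exec'd with a special program name,
	// so dispatch to the registered function instead of normal startup.
	if os.Args[0] == "docker-untar" {
		runUntar()
		return
	}

	// Parent path (normal startup): re-exec ourselves via /proc/self/exe,
	// overriding argv[0] and passing data through an extra file descriptor.
	r, w, err := os.Pipe()
	if err != nil {
		panic(err)
	}
	cmd := exec.Command("/proc/self/exe")
	cmd.Args = []string{"docker-untar"} // argv[0] selects the child behaviour
	cmd.ExtraFiles = []*os.File{r}      // becomes fd 3 in the child
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	r.Close() // parent no longer needs the read end
	fmt.Fprintln(w, "options / tar stream would go here")
	w.Close()
	if err := cmd.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, "child failed:", err)
	}
}

// runUntar stands in for the registered "docker-untar" function: it reads its
// options/payload from the extra file descriptor (fd 3) passed by the parent.
func runUntar() {
	data, err := io.ReadAll(os.NewFile(3, "options"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "read options:", err)
		os.Exit(1)
	}
	fmt.Printf("docker-untar child received %d bytes\n", len(data))
}
```

Running the binary under its normal name takes the parent path; running it with argv[0] overridden to docker-untar, as the shell simulation below does with exec -a, takes the child path instead.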
We can simulate this with the shell:
docker run -it --platform=linux/arm64 --privileged --entrypoint=/bin/sh --rm docker:dind -c "exec -a docker-untar /usr/local/bin/dockerd"
read options: bad file descriptor
The error there is expected because the docker-untar function is expecting to read from a file descriptor.
If we do this under Rosetta, it seems Rosetta is not passing along the program name, and docker just starts up as if it were a normal startup.
docker run -it --platform=linux/amd64 --privileged --entrypoint=/bin/sh --rm docker:dind -c "exec -a docker-untar /usr/local/bin/dockerd"
INFO[2023-03-10T01:24:47.418901593Z] Starting up
WARN[2023-03-10T01:24:47.431102926Z] could not change group /var/run/docker.sock to docker: group docker not found
INFO[2023-03-10T01:24:47.437504301Z] libcontainerd: started new containerd process pid=15
INFO[2023-03-10T01:24:47.439554384Z] [core] [Channel #1] Channel created module=grpc
<output elided>
The actual error we see is because the pull logic is also passing an argument to the command, and dockerd doesn't normally accept arguments except for re-execs.
Also worth noting, I think this is fixed on master already since @corhere got rid of the re-execing... at least for tar/untar.
"dockerd" accepts no argument(s). See 'dockerd --help'.
This sounds like this may be related to docker engine doing a "re-exec" of itself to launch a process in another namespace. The binary running under rosetta could perhaps affect this. 🤔
I'm curious though; what's the use-case to run a non-matching platform of docker-in-docker? (note that user land emulation either with QEMU or Rosetta will always be a best-effort)
As for the use-case: some design mistakes that were made, and a desire to work on Mac instead of Windows :)
Very good and interesting explanation, thanks. I will try to launch master and check whether it still occurs.
It still fails with the same error.
This is not specific to Docker Desktop either - I'm getting the same error trying to test a dind image on arm64 macOS + Linux VM + Rosetta.
I am getting the same error as well when using Rosetta and mcr.microsoft.com/devcontainers/universal:latest (a devcontainer image which allows using docker-inside-docker).
Example:
codespace ➜ ~ $ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
719385e32844: Extracting [==================================================>] 2.457kB/2.457kB
failed to register layer: Error processing tar file(exit status 1): "dockerd" accepts no argument(s).
See 'dockerd --help'.
Usage: dockerd [OPTIONS]
A self-sufficient runtime for containers.
Any idea when this will be fixed? Thanks!
In general, I would not recommend running docker (in docker) itself under QEMU / Rosetta emulation. The docker engine (when running with the native arch) still allows containers to be run with non-matching platforms, either by passing --platform when running images or by setting the DOCKER_DEFAULT_PLATFORM environment variable; also see https://github.com/containerd/containerd/pull/8533#issuecomment-1554813355
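For completeness, here is a hedged sketch (my addition; the thread only discusses the CLI) of doing the same thing programmatically against a natively-running engine, using the Docker Engine Go client's ImagePull with an explicit Platform. It assumes a v23/v24-era github.com/docker/docker SDK, and the image reference is arbitrary; it is roughly the API-level counterpart of docker pull --platform=linux/amd64.

```go
// Hedged sketch: ask a natively-running engine to pull an explicit amd64 image,
// instead of running the whole engine under emulation.
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to the engine using the usual environment variables (DOCKER_HOST, etc.).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Pull the amd64 variant explicitly; the image reference is arbitrary.
	rc, err := cli.ImagePull(ctx, "docker.io/library/hello-world:latest",
		types.ImagePullOptions{Platform: "linux/amd64"})
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	// Stream the pull progress (JSON messages) to stdout.
	io.Copy(os.Stdout, rc)
}
```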
Expected behavior
dind can pull amd64 images with the Rosetta feature enabled
Actual behavior
dind fails to pull images with the Rosetta feature enabled
Information
Output of /Applications/Docker.app/Contents/MacOS/com.docker.diagnose check
Steps to reproduce the behavior