Open mattwelke opened 3 years ago
I agree that it looks like a DNS issue. Do you have slowness looking up the same DNS addresses from WSL 2 outside the context of Docker?
@stephen-turner No, using `dig` from WSL 2 is instant, including for popular domains like google.com and for one of the domains the `docker build` commands used:
~ > dig docker.io
; <<>> DiG 9.16.1-Ubuntu <<>> docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44701
;; flags: qr rd ad; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;docker.io. IN A
;; ANSWER SECTION:
docker.io. 0 IN A 52.55.43.248
docker.io. 0 IN A 3.220.75.233
docker.io. 0 IN A 34.192.114.9
docker.io. 0 IN A 34.204.125.5
docker.io. 0 IN A 52.6.170.51
docker.io. 0 IN A 34.200.7.11
;; Query time: 10 msec
;; SERVER: 172.26.160.1#53(172.26.160.1)
;; WHEN: Fri Jan 29 12:26:09 EST 2021
;; MSG SIZE rcvd: 132
Hmmm, that's strange. We do have our own DNS server, but other people aren't reporting that it's slow.
What's your environment like? Do you have any corporate security software that could be inserting itself on the network path?
This is my personal machine. I'm not using any VPNs or antivirus software besides the default that Windows 10 Pro runs. This is actually a fresh install of Windows, less than two weeks old.
But I just tried disabling "Microsoft Defender Firewall" under "Private Network", and that improved it. For `docker pull`, it's instant every time now. For `docker build`, there's still a 10-second hang each time the image is built with new content in my source code files that hasn't been built before.
I don't understand right now why disabling the firewall helped. I'm also not very familiar with Windows security settings; I have more experience using Ubuntu for development. I remember using `ufw` in Ubuntu to let apps and ports through, so I assume Windows has controls for this too. I could look into tweaking them, but I'd also need to know what apps/ports to allowlist in the settings, and why Docker in WSL 2 requires them to be allowlisted when it didn't before.
EDIT:
Turns out it still hangs sometimes with docker pull, still for about 10 seconds, even with the firewall turned off.
@stephen-turner
I also have another computer that I hadn't updated Docker Desktop on in a while. It was running 2.5.0.1 (49550), also on WSL 2, also using Ubuntu 20.04, before I updated Docker Desktop a few minutes ago. I tested out some `docker pull`s, and they were all instant. No problems, the same experience I've always had with both of my computers.
Then I updated it to the latest version as of right now, which is 3.1.0, and it now also has the problem where `docker pull` operations hang for about 10 seconds.
So for me, the issue affects more than one computer, and it only occurs after updating to 3.1.0. However, I can't pinpoint exactly which version greater than 2.5.0.1 introduced the regression.
I have the same issue: `docker pull` hangs for exactly 30 seconds on WSL 2. I tried using a VM with Debian and it was instant. Disabling the firewall like you mentioned actually helped and made pulling instant, although I'm not sure I want to have the firewall disabled. Any decent workaround for this?
I recently reinstalled Windows twice and had this issue both times, but it wasn't present before that, so this might be a Windows/WSL 2 issue.
Turns out this problem was introduced in Docker Desktop 3.1.0 (which takes 59 seconds for me to pull the `docker/getting-started` image), while Docker Desktop 3.0.4 pulls almost instantaneously (4 seconds).
If there is any version older than 3.1.0 and newer than 3.0.4, I would love to try it. Until this gets fixed, I'm staying on 3.0.4.
Nice to see another person confirm the issue so I know I wasn't crazy. I've just been putting up with the delay since upgrading to 3.1.0 (and since reformatting my other machines, which I do quite frequently). But it sure would be nice to have instant pulls again.
I don't really know what to make of this. Given that we only have two people reporting it, I feel that it must be something very specific in your environment that is interacting with a change in Docker Desktop, but I'm not sure what.
True, it interrupts flow a lot, so I feel like if it were affecting more than 2 people, we'd see more reports of it here. I can try tweaking my environment on my computers to see if I can get it to stop. So far it's affected every computer I've used Docker Desktop and WSL2 on through multiple reformats.
Turns out it was actually a DNS issue, with IPv6. I disabled IPv6 via adapter settings in Windows on the main adapter (Ethernet for me), and this worked flawlessly. Then I set up Cloudflare's 1.1.1.1 IPv6 DNS and a static IPv6 address. This actually worked, and as far as I can see, times are the same as before on 3.0.1.
If it helps debugging this issue, I have 2.5GbE Realtek PCIe network card.
Interesting. I'll look into DNS issues on my end to see if that's the culprit too.
possibly caused by this? https://github.com/microsoft/WSL/issues/4901
@chuanqisun No, the issue didn't come up for me until that particular new version of Docker Desktop for Windows. Also, my internet connection is very fast inside WSL (I get my full 1 Gbit/s up and down); it's just that there is always a very long pause the first time a request for a particular Docker image is made. Other requests, like doing an `apt-get update`, work instantly.
The issue continues to affect me for all computers I use Docker Desktop on, throughout each update there has been since this was reported.
Reporting in, been baffled by this exact bug for the last few days. No matter what image I pull it always takes 10 seconds to load the metadata. :\ Tried a different DNS as well.
@Kleptine Can't say I'm happy to hear you're experiencing this, because I know how frustrating it is. But I am glad to see a 3rd person reporting in also experiencing it. Hopefully with more reports, we can get it on the maintainers' radar.
I noticed something interesting today. tl;dr: logging into Docker Hub removed this delay, but you have to log in before each pull step.
Details:
I googled the error tonight and found https://github.com/docker/buildx/issues/476, where one of the error messages someone posted includes the substring `error getting credentials`. They must have had some sort of debug output mode on to see this. Or maybe it only shows this if there's an error (and we're only experiencing a delay followed by eventual success).
After logging in, I pulled a few more images. To reproduce my problem, I have to pull an image I've never tried to pull before. I worked my way down Node.js versions until I got to version 6, which was a version I'd never pulled before. Presumably, Docker would have to fetch metadata for this version. It pulled instantly:
~ > docker login
Authenticating with existing credentials...
Login Succeeded
~ > docker pull node:8
8: Pulling from library/node
146bd6a88618: Already exists
9935d0c62ace: Already exists
db0efb86e806: Already exists
e705a4c4fd31: Already exists
c877b722db6f: Already exists
645c20ec8214: Already exists
db8fbd9db2fe: Already exists
1c151cd1b3ea: Already exists
fbd993995f40: Already exists
Digest: sha256:a681bf74805b80d03eb21a6c0ef168a976108a287a74167ab593fc953aac34df
Status: Downloaded newer image for node:8
docker.io/library/node:8
~ > docker pull node:6
6: Pulling from library/node
c5e155d5a1d1: Pull complete
221d80d00ae9: Pull complete
4250b3117dca: Pull complete
3b7ca19181b2: Pull complete
425d7b2a5bcc: Pull complete
69df12c70287: Pull complete
ea2f5386a42d: Pull complete
d421d2b3c5eb: Pull complete
Digest: sha256:e133e66ec3bfc98da0440e552f452e5cdf6413319d27a2db3b01ac4b319759b3
Status: Downloaded newer image for node:6
docker.io/library/node:6
BUT this only works if I log in again every time I want to pull an image I've never pulled before. If I log in, pull one (which will be instant), and then try to pull another, I get the delay. But if I write a one-liner like `docker login; docker pull python:3.4; docker login; docker pull python:3.3; docker login; docker pull python:3.2`, each will pull instantly.
Perhaps this delay is caused by some sort of issue with tokens, where tokens can only be used once and are refreshed after a 10-second timeout?
Unfortunately, this workaround doesn't solve most of the pain this problem causes, because most of the pain comes from the many pull steps that occur in processes like building an image or starting a docker-compose stack. I can't have it run `docker login` before each pull step in such processes.
EDIT:
I tested this with another one-liner (`for VERSION in 10 9.6 9.5 9.4; do docker login; docker pull postgres:$VERSION; done`) and noticed that the first login took 10 seconds too. Then, each login after that was instant. All pulls were instant.
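The hypothesis above (a registry token that is cheap to reuse while fresh but slow to re-fetch once it expires) can be sketched as a toy model. This is purely illustrative, not how Docker actually implements credentials; the class, TTL, and delay values are my own assumptions chosen to match the observed ~10-second behavior:

```python
import time


class TokenCache:
    """Toy model of the hypothesized registry-token behavior: reusing a
    cached token is instant, but fetching a fresh one costs ~10 seconds."""

    TTL = 10.0          # assumed cache lifetime, in seconds
    FETCH_DELAY = 10.0  # assumed cost of fetching a new token

    def __init__(self, clock=time.monotonic):
        self._clock = clock  # injectable clock, handy for testing
        self._token = None
        self._fetched_at = None

    def get(self):
        """Return (token, simulated_delay_in_seconds)."""
        now = self._clock()
        if self._token is None or now - self._fetched_at > self.TTL:
            # Cache miss: this is where the observed ~10s pause would occur.
            self._token = f"token-{now}"
            self._fetched_at = now
            return self._token, self.FETCH_DELAY
        return self._token, 0.0  # cache hit: instant
```

Under this model, pulls fired in quick succession all hit the cache and are instant, while any pull after a pause longer than the TTL pays the fetch cost again, matching the pattern reported in this thread.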
I tried a `docker login` and that command also took about 10-15 seconds! Brutal.
After logging in, the subsequent `docker build` metadata step is only 1 second.
However, on the immediate build after, it's back to 10 seconds again. This doesn't actually work around the problem, it just moves it to the `docker login` command. But it does seem to show this is some sort of issue with Docker Hub and Windows networking.
For reference, I am also using WSL2. I installed docker desktop on my personal computer just a few days back, so everything is a fresh install here. Nothing fancy on my machine, it's just my personal home box.
I tried disabling IPv6 on the network adapter, which results in the following behavior:
On the first run after, I get a new line, which shows up while it is loading metadata:
=> [auth] library/ubuntu:pull token for registry-1.docker.io
This first run still takes 10 seconds. On the next few subsequent runs, the metadata load is instant, back to where it should be. However, after waiting for a little while (a few minutes in my tests), the next `docker build` will again take 10 seconds on metadata.
So disabling IPv6 improves things but does not solve the issue, and disabling IPv6 system-wide is not really a usable workaround. The fact that I got new output definitely makes me think it's a networking issue for sure.
Yeah, it's some sort of networking issue. I confirmed earlier that it happened when I upgraded from Docker Desktop 2.5.0.1 to 3.1.0. Another user commented that for them, it happened when they updated from 3.0.4 to 3.1.0. So it looks like 3.1.0 is the update that introduced the regression.
Just wanted to report that I am experiencing this too. I am on a fairly fresh Windows 10 machine running WSL 2 and docker 3.3.3.
Running `docker login` seemed to momentarily fix the issue, with the following observation: I could then run `docker build` 20 times in quick succession without the lag (up + Enter, up + Enter, ...). I also observed that if I run a build multiple times in quick succession (without login), the first one is slow and the next ones are fast. After waiting about 10-20 seconds, the build becomes slow again. So it appears that any communication with docker.io is initially slow, then it's fast as long as you are running commands quickly, and then it becomes slow again after a ~10s pause.
If you're reading this and you're experiencing it, speak up! We need as many comments as possible left on this issue so that it can get attention from the devs! I will say, having that mandatory pause while my images pull has done wonders for my mental health. It's nice to take breaks now and then, and remember to smell the roses.
We managed to work around this issue (on Debian, Docker Version 20.10.7) by explicitly -not- using Docker BuildKit:
To see whether Docker BuildKit is enabled:
echo $DOCKER_BUILDKIT
Unset:
unset DOCKER_BUILDKIT
Or:
export DOCKER_BUILDKIT=
Worked without hanging afterwards.
Didn't work for me. Looks like I wasn't using BuildKit in the first place. If I run `echo $DOCKER_BUILDKIT` I get nothing. And it still has a huge pause:
> Didn't work for me. Looks like I wasn't using BuildKit in the first place. If I run `echo $DOCKER_BUILDKIT` I get nothing. And it still has a huge pause:
I can reproduce this 100%, same issue
@daniandl Can you confirm? Is it that you have the same result as @kaner, where if you unset the `DOCKER_BUILDKIT` env var, you don't get the hanging? Or is it that you reproduced what I experienced, where you didn't have the env var set in the first place, and your `docker pull` hung?
I can confirm that I don't have DOCKER_BUILDKIT enabled, and still see the issue.
> @daniandl Can you confirm? Is it that you have the same result as @kaner, where if you unset the `DOCKER_BUILDKIT` env var, you don't get the hanging? Or is it that you reproduced what I experienced, where you didn't have the env var set in the first place, and your `docker pull` hung?
Reproduced what you had, no env var existed, still manually unset it, still hanging
WSL 2 Ubuntu: 5.4.72-microsoft-standard-WSL2
Docker version: 20.10.7, build f0df350
Docker Desktop: 3.5.2 (66501)
Having the same issue, seemingly, although it seems to only be on some image versions.
I.e., `docker build` with `node:16.5.0-alpine3.13` is slow, and builds with `node:15.10.0-alpine3.13` are fast! I can switch between the two and reproduce this every time; both are cached. It appears to be the `load metadata for docker.io/library/node:16.5.0-alpine3.13` step or the `[auth] library/node:pull token for registry-1.docker.io` step that waits for 12 seconds each time (very consistently).
Here's a screenshot of the log when it's being slow:
Here's when it's being fast on the earlier version:
Interesting to note that there's no `[auth] library/node:pull token for registry-1.docker.io` step in the faster (older) version. I'm not sure what that means though!
Here are some lovely version numbers:
Docker Engine: 20.10.7, build f0df350
Docker Desktop: 3.5.2 (66501)
Compose: 1.29.2
Credential Helper: 0.6.4
Using WSL 2 (Ubuntu base image)
I've no DOCKER_BUILDKIT stuff either.
Hope that helps!
I decided to try to reproduce your issue with those two particular images. For me, it doesn't matter which one I pull, the one I pull first after it's been a while since pulling any image is slow.
Here's where I pull both images so they're cached and then immediately after, pull 16 and then 15. Both are fast:
~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13
real 0m1.439s
user 0m0.130s
sys 0m0.037s
~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13
real 0m1.402s
user 0m0.122s
sys 0m0.041s
Then I waited about a minute and pulled 16 and then 15. 16 was slow but 15 was fast:
~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13
real 0m17.295s
user 0m0.123s
sys 0m0.063s
~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13
real 0m1.348s
user 0m0.072s
sys 0m0.095s
And then waiting another minute and pulling 15 and then 16. 15 was slow but 16 was fast:
~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13
real 0m17.316s
user 0m0.091s
sys 0m0.081s
~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13
real 0m1.368s
user 0m0.109s
sys 0m0.054s
How odd! I feel silly now, as both of them are going slowly and nothing is fast 😅
At least we're both seeing the same thing now. How very odd that it was faster for a while with nothing changing besides the version.
I've since tried completely resetting docker to factory defaults via the troubleshooting tools, but that's done nothing to help.
I've done about 10 reformats and reinstallations of my OS across 3 or 4 devices by now (not to try to fix this issue in particular, I just tend to reformat a lot, swapping PC parts, etc). Always the same problem. Definitely a problem even when default settings are used.
After finally being annoyed for a few months at waiting about 10s ... I found this issue. I've tried a few solutions here, including disabling IPv6 (as IPv6 still isn't available in my area!), to no avail. However, I have quite a few adapters from working with Windows VMs, so I didn't try disabling IPv6 everywhere...
I have no idea what a solution looks like, but the 10s wait to build a container that usually takes <1s is driving me bananas...
Agreed. I have no problem with some CI/CD process that normally takes 5 minutes taking 5 minutes and 10 seconds instead. What drives me crazy is my short loop on my workstation, where I want to use Docker and rebuild an image very frequently. Or start up a docker-compose stack frequently.
@stephen-turner It would be great if someone could look into this further. I imagine if people aren't annoyed by it (i.e., their images take a while anyway), they won't find themselves on this issue or even notice it. It's when you're used to an image building in less than a second, and it now takes at least 10, that you'll notice. Even then, it's barely enough to annoy you unless you're specifically working on the Dockerfile (like I was when I decided to try and google the issue).
A work-around: run `docker pull <image>` before running the docker build. All subsequent builds will be fast.
I think there is something wrong with the caching...
> A work-around: run `docker pull <image>` before running the docker build. All subsequent builds will be fast. I think there is something wrong with the caching...

This doesn't work. Once it's been a while since you pulled the image, it'll be slow again. And that initial `docker pull` that you describe here to speed up the `docker build` will itself be slow, defeating the purpose.
> This doesn't work. Once it's been a while since you pulled the image, it'll be slow again. And that initial `docker pull` that you describe here to speed up the `docker build` will itself be slow, defeating the purpose.
I do many docker builds a day. The one manual pull from time to time saves me a lot of time.
Of course, it is still a work-around, not a real solution.
Sounds like we're experiencing the bug differently then. For me, after 10 seconds of not pulling images, my next image pull of any image will have that 10-17 second delay. This is enough to make the workaround not work for me, because I would need to do a manual slow pull before every pull that's part of my dev loop.
Taking a look at a `tcpdump` in WSL indicates that it doesn't seem to be doing anything on the network. At least from a WSL client:
sudo tcpdump -B 999999 &
(setsid docker build .) < /dev/null |& cat # to emulate no tty output
OK, I'm able to see that it takes ~5s for the client to send an `ACK` packet back after receiving some data, and then another 5s to do something after a final `ACK` from the server. I'm going to try and MITM the encryption to see what these packets are.
FWIW, running a local registry mirror appears to reduce the number of times this happens...
Create a `config.yml` with the following contents:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io
Then start it up:
docker run -p 5000:5000 --restart always -d -v /path/to/config.yml:/etc/docker/registry/config.yml registry:2
In the settings, go to Docker Engine and update the json document there to point to the registry. Here's my contents for example:
{
  "registry-mirrors": [
    "http://localhost:5000"
  ],
  "insecure-registries": [
    "http://localhost:5000"
  ],
  "debug": false,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
I was due for another reformat, so I tried another workaround this time. Instead of using Docker Desktop, I just installed Docker inside WSL 2 the way one would normally install it on Ubuntu (https://docs.docker.com/engine/install/ubuntu/), and so far, none of the features I rely on for my day-to-day work (like being able to run a web server from within a container and view it in my browser on the Windows side) have broken. So I'm going to continue like this for a while. Coming from Ubuntu without WSL 2, this actually feels more natural to me anyway. Docker Desktop was extra complexity I had to learn in order to continue using Docker when I switched to Windows:
> which docker
/usr/bin/docker
Docker Desktop provided other tools, like `kubectl`, but I just install those separately if I need them. For example, for `kubectl`, I can install it via APT or via the gcloud components manager. I used to do it via gcloud; now I'm just using APT.
~ > which kubectl
/usr/bin/kubectl
This has worked well for me so far. I'm able to pull images quickly even after waiting a while before the last pull:
> time docker pull mongo
Using default tag: latest
latest: Pulling from library/mongo
Digest: sha256:d78c7ace6822297a7e1c7076eb9a7560a81a6ef856ab8d9cde5d18438ca9e8bf
Status: Image is up to date for mongo:latest
docker.io/library/mongo:latest
real 0m0.426s
user 0m0.024s
sys 0m0.000s
Normally, I'd follow the post-installation setup steps (https://docs.docker.com/engine/install/linux-postinstall/) to add myself to the `docker` group so that I don't need to use `sudo`, and to enable the systemd services so that Docker starts automatically at boot. Adding my user to the group works fine. But even after running the commands to enable the services:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
...and then restarting to test it out, it doesn't work:
> docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I have to manually run a command to start the service. And it isn't a `systemctl` command:
> sudo systemctl start docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
It's a `service` command:
> sudo service docker start
* Starting Docker: docker [ OK ]
Then, until I shut my computer down, I can use Docker:
> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b81b92fedd7c mwelke/geolite2-web-service "/main" 4 days ago Exited (2) 4 days ago my-geo
This works well for me because my concern here was having such a long pause for every single pull operation (including those done automatically by build operations), which interrupted my flow while I was trying to work. Having to run one command that always finishes instantly, at the start of my work day, works fine for me.
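That one daily command could also be automated with a small, hypothetical shell helper (the function name is my own invention) that starts the service only when it isn't already running, suitable for WSL 2 distros where systemd isn't PID 1:

```shell
# Hypothetical helper for WSL 2 without systemd as PID 1:
# start the Docker service only if it isn't already running.
start_docker_if_needed() {
    if service docker status > /dev/null 2>&1; then
        echo "docker already running"
    else
        sudo service docker start
    fi
}
```

Sourcing this from `~/.profile` and calling it there would run the startup automatically at the beginning of each session (assuming you're willing to type your sudo password once, or configure passwordless sudo for `service docker start` in `sudoers`).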
Something I do only once in a while is test my app inside Kubernetes with a local Kubernetes cluster. On Ubuntu, I would use kind (https://github.com/kubernetes-sigs/kind) for this, but this doesn't seem to work with this approach:
~ > which kind
/home/matt/go/bin/kind
~ > kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✗ Starting control-plane 🕹️
This isn't a deal breaker for me right now. It's very rare I'd need to test something in Kubernetes. Usually I just test it in isolation in a Docker container and then deploy it, assuming it'll interact with the other containers properly. Also, I can still use kind in automated tests in my CI/CD (I've tested it with GitHub Actions and it worked fine - https://github.com/mattwelke/kind-and-ow-on-github-actions-test/blob/main/.github/workflows/koga.yaml) so I can just use kind in my outer loop instead of inner loop.
That sounds really interesting @mattwelke, can you invoke Docker from the Windows side? I'd switch to this approach in a heartbeat if it allowed IntelliJ and friends to invoke Docker containers without issue. Although I suppose I could install an X server and run IntelliJ from Ubuntu if all else fails. The 10-12 second delay is really grating on me now.
I too am seeing this issue lately: WSL 2 building my devcontainer in VS Code takes a long time to get started. Mine takes over 70 seconds.
@withinboredom No, in this case I'm not using Docker at all on the Windows side. It's just a program installed in Ubuntu in WSL 2. My use case for WSL 2 is to just have a self contained Linux development environment where I use Windows as a GUI. It's nice because Ubuntu as a GUI ended up giving me problems (I have a funky two monitor setup with atypical resolutions) and with Windows running, I can multitask and play games while coding, stuff like that. :P
I am also seeing this issue. A docker build of mine was repeatedly hanging on `load metadata for docker.io...` for 60+ seconds each time, even when all other build steps were cached.
Windows 10 19041.1110
WSL 2
Docker Desktop 3.5.2
`DOCKER_BUILDKIT` is not set in the shell from which I call docker, but the output of builds looks similar to what I would expect from enabling BuildKit in earlier versions (did it get enabled by default at some point?).
The `docker pull` workaround mentioned above works for me: the pull also hangs, but once it completes, subsequent builds do not.
> DOCKER_BUILDKIT is not set in the shell from which I call docker, but the output of builds looks similar to what I would expect from enabling buildkit in earlier versions (did it get enabled by default at some point?).
I experienced this too. At the same time as the bug appeared for me (in January 2021), I noticed the build output looked very different from normal. It had color in it and seemed to describe each step of the build in more detail, including live-updating counters for the number of tenths of a second that had passed before that step of the build succeeded. My screenshots at the top of this issue from January show what the docker build output started to look like for me.
I'm also seeing random curl-lib requests take ~10s from containers (causing timeouts). The next request will work fine. This only happens in Docker for Windows. I do not see any delays with `dig` or other tools.
Finally, found a solution...
I added this to my docker engine config, which fixed my `curl` DNS woes:
"dns": [ "8.8.8.8" ],
But it seems whatever settings buildkit is using, it's not respecting the engine's dns configuration.
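For context, the `dns` key belongs at the top level of the engine's `daemon.json` (editable under Settings → Docker Engine in Docker Desktop). A minimal sketch, using the value from the comment above:

```json
{
  "dns": ["8.8.8.8"]
}
```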
However, we can use buildx to remove the delay I'm seeing, type these commands and try running some builds:
docker buildx install
INSTANCE=$(docker buildx create)
docker buildx use $INSTANCE
docker build (--load|--push) [options] <file>
In my particular case, it turns out that whatever Docker for Windows is using as a DNS proxy sometimes gets hung up for quite a while.
I checked the steps described in https://docs.docker.com/docker-for-windows/troubleshoot/ and I did the "purge data" and "reset to factory settings" options. Neither fixed my issue.
I used to have none of these pauses using Docker on WSL 2. I'd run `docker pull` or `docker build` and it would begin immediately. I'm not sure if it's the latest update for me (this is a fresh Windows 10 install) that's causing this, but now it's very slow when I start. When I run `docker pull postgres`, it hangs on `Using default tag: latest` for about 10 seconds. It also hangs this way when I run `docker pull postgre` (a non-existent image); it still has the pause. So I'm thinking it might be related to DNS.
When I tried doing a build, I saw this happen again, because now my build output is different. It shows more steps and has color. The very first step mentioned getting metadata, and that step always took 10 seconds.
After the step finishes, the rest proceeds quickly.
Since I'm caching Python dependencies, the rest finishes instantly since it just has to copy in my changed source code files, but I still have to wait that 10 seconds each time:
My Dockerfile, for reference:
Note that this affects all images I tried, not just `postgres` and `python`. I also found that the pause with `docker pull` happened with `node`, `mongo`, `openjdk`, and `amazon/opendistro-for-elasticsearch`.

Actual behavior

`docker pull` operations take a long time to start. `docker build` operations take 10 seconds to finish the first step, which appears to be doing nothing.

Expected behavior

`docker pull` begins downloading the first layer of the image quickly, and `docker build`'s first step finishes almost instantly instead of taking 10 seconds.

Information

Please, help us understand the problem. For instance:

Steps to reproduce the behavior

Described above.