docker / for-win

Bug reports for Docker Desktop for Windows
https://www.docker.com/products/docker#/windows

"load metadata for" during docker pull hangs for about 10-20 seconds #10247

Open mattwelke opened 3 years ago

mattwelke commented 3 years ago

I checked the steps described in https://docs.docker.com/docker-for-windows/troubleshoot/ and I did the "purge data" and "reset to factory settings" options. Neither fixed my issue.

I used to have none of these pauses using Docker on WSL 2. I'd run "docker pull" or "docker build" and it would begin immediately. I'm not sure whether the latest update caused this (this is a fresh Windows 10 install), but now it's very slow when I start. When I run docker pull postgres, it hangs on Using default tag: latest for about 10 seconds. The same pause happens when I run docker pull postgre (a non-existent image), so I'm thinking it might be related to DNS.
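To sanity-check the DNS theory, here's a rough sketch that times name resolution on its own, outside Docker (the time_dns_lookup helper is mine, and the hostnames assume the usual Docker Hub registry endpoints):

```python
import socket
import time

def time_dns_lookup(host: str) -> float:
    """Return the wall-clock seconds taken to resolve `host` to addresses."""
    start = time.perf_counter()
    socket.getaddrinfo(host, 443)
    return time.perf_counter() - start

# If these come back in tens of milliseconds, DNS alone can't explain
# a 10-second hang.
for host in ("registry-1.docker.io", "auth.docker.io"):
    try:
        print(f"{host}: {time_dns_lookup(host) * 1000:.0f} ms")
    except socket.gaierror as err:
        print(f"{host}: lookup failed ({err})")
```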

When I tried doing a build, I saw this happen again. My build output is different now (it shows more steps and has color), and the very first step, which mentions getting metadata, always took 10 seconds:

image

After the step finishes, the rest proceeds quickly.

Since I'm caching Python dependencies, the rest finishes almost instantly (it just has to copy in my changed source code files), but I still have to wait those 10 seconds each time:

image

My Dockerfile, for reference:

FROM python:3.9-buster

ENV PYTHONUNBUFFERED=1 \
  POETRY_HOME=/opt/poetry \
  POETRY_NO_INTERACTION=1 \
  POETRY_VERSION=1.1.4 \
  POETRY_VIRTUALENVS_CREATE=false

ENV PATH="$POETRY_HOME/bin:$VENV_PATH/bin:$PATH"

WORKDIR /usr/src/app

RUN curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -

COPY pyproject.toml poetry.lock /usr/src/app/

RUN poetry install --no-dev

COPY wisdom_demo_beacon_generator wisdom_demo_beacon_generator
COPY searches_top_100.csv searches_top_100.csv

RUN poetry install --no-dev

CMD [ "python", "wisdom_demo_beacon_generator/main.py" ]

Note that this affects all images I tried, not just postgres and python. I also found that the pause with docker pull happened with node, mongo, openjdk, and amazon/opendistro-for-elasticsearch.

Actual behavior

docker pull operations take a long time to start. docker build operations take 10 seconds to finish the first step, which appears to do nothing.

Expected behavior

docker pull begins downloading the first layer of the image quickly, and docker build's first step finishes almost instantly instead of taking 10 seconds.

Information

Please, help us understand the problem. For instance:

Steps to reproduce the behavior

Described above.

stephen-turner commented 3 years ago

I agree that it looks like a DNS issue. Do you have slowness looking up the same DNS addresses from WSL 2 outside the context of Docker?

mattwelke commented 3 years ago

@stephen-turner No, using dig from WSL 2 is instant, including for popular domains like google.com and for one of the domains the docker build commands used:

~ > dig docker.io

; <<>> DiG 9.16.1-Ubuntu <<>> docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44701
;; flags: qr rd ad; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;docker.io.                     IN      A

;; ANSWER SECTION:
docker.io.              0       IN      A       52.55.43.248
docker.io.              0       IN      A       3.220.75.233
docker.io.              0       IN      A       34.192.114.9
docker.io.              0       IN      A       34.204.125.5
docker.io.              0       IN      A       52.6.170.51
docker.io.              0       IN      A       34.200.7.11

;; Query time: 10 msec
;; SERVER: 172.26.160.1#53(172.26.160.1)
;; WHEN: Fri Jan 29 12:26:09 EST 2021
;; MSG SIZE  rcvd: 132
stephen-turner commented 3 years ago

Hmmm, that's strange. We do have our own DNS server, but other people aren't reporting that it's slow.

What's your environment like? Do you have any corporate security software that could be inserting itself on the network path?

mattwelke commented 3 years ago

This is my personal machine. I'm not using any VPNs or antivirus software besides the defaults that Windows 10 Pro runs. This is actually a fresh install of Windows, less than two weeks old.

But I just tried disabling "Microsoft Defender Firewall" under "Private Network", and that improved it. For docker pull, it's instant every time now. For docker build, there's still a 10 second hang each time the image is built with new content in my source code files that hasn't been built before.

I don't understand right now why disabling the firewall helped. I'm also not very familiar with Windows security settings. I have more experience using Ubuntu for development. I remember using ufw in Ubuntu to let apps and ports through, so I assume Windows has controls for this too. I could look into tweaking them, but I'd also need to know what apps/ports to allowlist in the settings, and why Docker in WSL 2 requires them to be allowlisted when it didn't before.

EDIT:

Turns out it still hangs sometimes with docker pull, still for about 10 seconds, even with the firewall turned off.

mattwelke commented 3 years ago

@stephen-turner I also have another computer whose Docker Desktop I hadn't updated in a while. It was running 2.5.0.1 (49550), also on WSL 2, also using Ubuntu 20.04, before I updated Docker Desktop a few minutes ago. Before updating, I tested out some docker pulls, and they were all instant. No problems, the same experience I've always had with both of my computers.

Then I updated it to the latest version as of right now, which is 3.1.0, and it now also has the problem where the docker pull operations hangs for about 10 seconds.

So for me, the issue affects more than one computer and only occurs after updating to 3.1.0. However, I can't pinpoint exactly which version greater than 2.5.0.1 introduced the regression.

Timic3 commented 3 years ago

I have the same issue, docker pull hangs for exactly 30 seconds on WSL 2 - I tried using VM with Debian and it was instant. Disabling firewall like you mentioned actually helped and made pulling instant, although I'm not sure I want to have firewall disabled. Any decent workaround for this?

I recently reinstalled Windows twice and had this issue both times, but it wasn't present before that, so this might be a Windows/WSL 2 issue.

Timic3 commented 3 years ago

Turns out this problem is introduced in Docker Desktop 3.1.0 (which takes 59 seconds for me to pull docker/getting-started image), while Docker Desktop 3.0.4 pulls almost instantaneously (4 seconds).

If there is any older version than 3.1.0 and newer than 3.0.4, I would love to try it. Until this gets fixed, I'm staying on 3.0.4.

mattwelke commented 3 years ago

Nice to see another person confirm the issue so I know I wasn't crazy. I've just been putting up with the delay since upgrading to 3.1.0 (and since reformatting my other machines, which I do quite frequently). But it sure would be nice to have instant pulls again.

stephen-turner commented 3 years ago

I don't really know what to make of this. Given that we only have two people reporting it, I feel that it must be something very specific in your environment that is interacting with a change in Docker Desktop, but I'm not sure what.

mattwelke commented 3 years ago

True, it interrupts flow a lot, so I feel like if it were affecting more than 2 people, we'd see more reports of it here. I can try tweaking my environment on my computers to see if I can get it to stop. So far it's affected every computer I've used Docker Desktop and WSL2 on through multiple reformats.

Timic3 commented 3 years ago

Turns out it was actually a DNS issue, with IPv6. I disabled IPv6 via adapter settings in Windows on the main adapter (Ethernet for me), and that worked flawlessly. Then I set up Cloudflare's 1.1.1.1 DNSv6 and set a static IPv6 address. This actually worked, and as far as I can see, times are the same as before on 3.0.1.

If it helps debugging this issue, I have 2.5GbE Realtek PCIe network card.

mattwelke commented 3 years ago

Interesting. I'll look into DNS issues on my end to see if that's the culprit too.

chuanqisun commented 3 years ago

possibly caused by this? https://github.com/microsoft/WSL/issues/4901

mattwelke commented 3 years ago

@chuanqisun No, the issue didn't come up for me until that particular new version of Docker Desktop for Windows. Also, my internet connection is very fast inside WSL (I get my full 1 Gbit/s up and down); it's just that there is always a very long pause the first time a request for a particular Docker image is made. Other requests, like an apt-get update, work instantly.

The issue continues to affect me for all computers I use Docker Desktop on, throughout each update there has been since this was reported.

Kleptine commented 3 years ago

Reporting in, been baffled by this exact bug for the last few days. No matter what image I pull it always takes 10 seconds to load the metadata. :\ Tried a different DNS as well.

mattwelke commented 3 years ago

@Kleptine Can't say I'm happy to hear you're experiencing this, because I know how frustrating it is. But I am glad to see a 3rd person reporting in also experiencing it. Hopefully with more reports, we can get it on the maintainers' radar.

mattwelke commented 3 years ago

I noticed something interesting today. tl;dr: logging into Docker Hub removed this delay, but you have to log in before each pull step.

Details:

I googled the error tonight and found https://github.com/docker/buildx/issues/476, where one of the error messages someone posted includes the substring error getting credentials. They must have had some sort of debug output mode on to see this. Or maybe it only shows up when there's an error (and we're only experiencing a delay followed by eventual success).

After logging in, I pulled a few more images. To reproduce my problem, I have to pull an image I've never tried to pull before. I worked my way down Node.js versions until I got to version 6, which was a version I'd never pulled before. Presumably, Docker would have to fetch metadata for this version. It pulled instantly:

~ > docker login
Authenticating with existing credentials...
Login Succeeded
~ > docker pull node:8
8: Pulling from library/node
146bd6a88618: Already exists
9935d0c62ace: Already exists
db0efb86e806: Already exists
e705a4c4fd31: Already exists
c877b722db6f: Already exists
645c20ec8214: Already exists
db8fbd9db2fe: Already exists
1c151cd1b3ea: Already exists
fbd993995f40: Already exists
Digest: sha256:a681bf74805b80d03eb21a6c0ef168a976108a287a74167ab593fc953aac34df
Status: Downloaded newer image for node:8
docker.io/library/node:8
~ > docker pull node:6
6: Pulling from library/node
c5e155d5a1d1: Pull complete
221d80d00ae9: Pull complete
4250b3117dca: Pull complete
3b7ca19181b2: Pull complete
425d7b2a5bcc: Pull complete
69df12c70287: Pull complete
ea2f5386a42d: Pull complete
d421d2b3c5eb: Pull complete
Digest: sha256:e133e66ec3bfc98da0440e552f452e5cdf6413319d27a2db3b01ac4b319759b3
Status: Downloaded newer image for node:6
docker.io/library/node:6

BUT this only works if I log in again every time I want to pull an image I've never pulled before. If I log in, pull one (which will be instant), and then try to pull another, I get the delay. But if I write a one liner like docker login; docker pull python:3.4; docker login; docker pull python:3.3; docker login; docker pull python:3.2, each will pull instantly.

Perhaps this delay is caused by some sort of issue with tokens, where tokens can only be used once and are refreshed after a 10 second timeout?

Unfortunately, this workaround doesn't solve most of the pain this problem causes because most of the pain comes from the many pull steps that occur in processes like building an image or starting a docker-compose stack. I can't have it run docker login before each pull step in such processes.

EDIT:

I tested this with another one liner (for VERSION in 10 9.6 9.5 9.4; do docker login; docker pull postgres:$VERSION; done) and noticed that the first login took 10 seconds too. Then, each login after that was instant. All pulls were instant.
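If the hang really is in the anonymous pull-token fetch, it should be possible to time that request in isolation. A rough sketch, assuming Docker Hub's standard token endpoint (auth.docker.io); the helper names are mine:

```python
import json
import time
import urllib.parse
import urllib.request

def token_url(repository: str) -> str:
    """Build the Docker Hub auth URL that issues an anonymous pull token
    for `repository` (e.g. "library/node")."""
    query = urllib.parse.urlencode({
        "service": "registry.docker.io",
        "scope": f"repository:{repository}:pull",
    })
    return f"https://auth.docker.io/token?{query}"

def time_token_fetch(repository: str) -> float:
    """Time a single anonymous token request; the hang in this thread
    appears to happen around exactly this kind of request."""
    start = time.perf_counter()
    with urllib.request.urlopen(token_url(repository)) as resp:
        json.load(resp)  # response body carries the bearer token
    return time.perf_counter() - start

# Usage (hits the network):
#   print(f"{time_token_fetch('library/node'):.1f} s")
```

If this call alone takes ~10 seconds cold and is instant while "warm", that would match the login-then-pull behavior above.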

Kleptine commented 3 years ago

I tried a docker login and that command also took about 10-15 seconds! Brutal.

After logging in, the subsequent docker build metadata step is only 1 second.

However, on the immediate build after that, it's back to 10 seconds again. This doesn't actually work around the problem; it just moves it to the docker login command. But it does seem to show this is some sort of issue between Docker Hub and Windows networking.

For reference, I am also using WSL2. I installed docker desktop on my personal computer just a few days back, so everything is a fresh install here. Nothing fancy on my machine, it's just my personal home box.

Kleptine commented 3 years ago

I tried disabling IPv6 on the network adapter, which results in the following behavior:

On the first run after, I get a new line, which shows up while it is loading metadata: => [auth] library/ubuntu:pull token for registry-1.docker.io image

This first run still takes 10 seconds. On the next few subsequent runs, the metadata load is instant, back to where it should be. However, after waiting for a little while (a few minutes in my tests). The next docker build will again take 10 seconds on metadata.

So disabling IPv6 improves things but does not solve the issue, and disabling IPv6 system-wide is not really a usable workaround. The fact that I got new output definitely makes me think it's a networking issue.

mattwelke commented 3 years ago

Yeah it's some sort of networking issue. I confirmed earlier that it happened when I upgraded from Docker Desktop 2.5.0.1 to 3.1.0. Another user commented that for them, it happened when they updated from 3.0.4 to 3.1.0. So it looks like 3.1.0 is the update that introduced the regression.

DesignByOnyx commented 2 years ago

Just wanted to report that I am experiencing this too. I am on a fairly fresh Windows 10 machine running WSL 2 and docker 3.3.3.

Running docker login seemed to momentarily fix the issue with the following observation:

I also observed that if I run a build multiple times in quick succession (without logging in), the first one is slow and the next ones are fast. After waiting about 10-20 seconds, the build becomes slow again. So it appears that any communication with docker.io is initially slow, then fast as long as you keep running commands quickly, and slow again after a ~10 s pause.
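That warm/cold pattern could be characterized with a small timing harness, sketched here (measure_after_idle is a hypothetical helper; plug in an actual docker command to test the real behavior):

```python
import time
from typing import Callable

def measure_after_idle(action: Callable[[], None],
                       idle_gaps: list[float]) -> list[tuple[float, float]]:
    """For each idle gap, sleep that long, run `action`, and record how
    long it took. The (gap, seconds) pairs should reveal a "fast while
    warm, slow after ~10 s idle" pattern if one exists."""
    results = []
    for gap in idle_gaps:
        time.sleep(gap)
        start = time.perf_counter()
        action()
        results.append((gap, time.perf_counter() - start))
    return results

# Usage sketch (would actually run docker; gaps in seconds):
#   import subprocess
#   pull = lambda: subprocess.run(
#       ["docker", "pull", "node:16.5.0-alpine3.13"],
#       capture_output=True, check=True)
#   for gap, secs in measure_after_idle(pull, [0, 5, 15, 30, 60]):
#       print(f"after {gap:>4.0f}s idle: {secs:.1f}s")
```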

mattwelke commented 2 years ago

If you're reading this and you're experiencing it, speak up! We need as many comments as possible left on this issue so that it can get attention from the devs! I will say, having that mandatory pause while my images pull has done wonders for my mental health. It's nice to take breaks now and then, and remember to smell the roses.

image

kaner commented 2 years ago

We managed to work around this issue (on Debian, Docker Version 20.10.7) by explicitly -not- using Docker BuildKit:

To see whether Docker BuildKit is enabled: echo $DOCKER_BUILDKIT

Unset: unset DOCKER_BUILDKIT Or: export DOCKER_BUILDKIT=

Worked without hanging afterwards.

mattwelke commented 2 years ago

Didn't work for me. Looks like I wasn't using BuildKit in the first place. If I run echo $DOCKER_BUILDKIT I get nothing. And it still has a huge pause:

python-pull-windows

daniandl commented 2 years ago

Didn't work for me. Looks like I wasn't using BuildKit in the first place. If I run echo $DOCKER_BUILDKIT I get nothing. And it still has a huge pause:

I can reproduce this 100%, same issue

mattwelke commented 2 years ago

@daniandl Can you confirm? Is it that you have the same result as @kaner, where if you unset the DOCKER_BUILDKIT env var you don't get the hanging? Or is it that you reproduced what I experienced, where the env var wasn't set in the first place, and your docker pull still hung?

Kleptine commented 2 years ago

I can confirm that I don't have DOCKER_BUILDKIT enabled, and still see the issue.

daniandl commented 2 years ago

@daniandl Can you confirm? Is it that you have the same result as @kaner, where if you unset the DOCKER_BUILDKIT env var you don't get the hanging? Or is it that you reproduced what I experienced, where the env var wasn't set in the first place, and your docker pull still hung?

Reproduced what you had: no env var existed, I still manually unset it, and it still hangs.

WSL 2 Ubuntu kernel: 5.4.72-microsoft-standard-WSL2
Docker version: 20.10.7, build f0df350
Docker Desktop: 3.5.2 (66501)

ColinBradleyDriveWorks commented 2 years ago

Having the same issue, seemingly, although it only seems to affect some image versions. For example, a docker build with node:16.5.0-alpine3.13 is slow and builds with node:15.10.0-alpine3.13 are fast! I can switch between the two and reproduce this every time; both are cached. It appears to be the load metadata for docker.io/library/node:16.5.0-alpine3.13 step, or the [auth] library/node:pull token for registry-1.docker.io step, that waits 12 seconds each time (very consistently).

Here's a screen shot of the log when it's being slow: image

Here's when it's being fast on the earlier version: image

Interesting to note that there's no [auth] library/node:pull token for registry-1.docker.io step in the faster (older) version. I'm not sure what that means though!

Here are some lovely version numbers:

Docker engine: 20.10.7, build f0df350
Docker Desktop: 3.5.2 (66501)
Compose: 1.29.2
Credential Helper: 0.6.4
Using WSL 2 (Ubuntu base image)

I've no DOCKER_BUILDKIT stuff either.

Hope that helps!

mattwelke commented 2 years ago

I decided to try to reproduce your issue with those two particular images. For me, it doesn't matter which one I pull; whichever image I pull first, after it's been a while since pulling any image, is slow.

Here's where I pull both images so they're cached and then immediately after, pull 16 and then 15. Both are fast:

~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13

real    0m1.439s
user    0m0.130s
sys     0m0.037s
~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13

real    0m1.402s
user    0m0.122s
sys     0m0.041s

Then I waited about a minute and pulled 16 and then 15. 16 was slow but 15 was fast:

~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13

real    0m17.295s
user    0m0.123s
sys     0m0.063s
~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13

real    0m1.348s
user    0m0.072s
sys     0m0.095s

And then waiting another minute and pulling 15 and then 16. 15 was slow but 16 was fast:

~ > time docker pull node:15.10.0-alpine3.13
15.10.0-alpine3.13: Pulling from library/node
Digest: sha256:255f13ef7d291034d960343d71962c6900d7f6e449b8ba733d9cc920a0acc070
Status: Image is up to date for node:15.10.0-alpine3.13
docker.io/library/node:15.10.0-alpine3.13

real    0m17.316s
user    0m0.091s
sys     0m0.081s
~ > time docker pull node:16.5.0-alpine3.13
16.5.0-alpine3.13: Pulling from library/node
Digest: sha256:50b33102c307e04f73817dad87cdae145b14782875495ddd950b5a48e4937c70
Status: Image is up to date for node:16.5.0-alpine3.13
docker.io/library/node:16.5.0-alpine3.13

real    0m1.368s
user    0m0.109s
sys     0m0.054s
ColinBradleyDriveWorks commented 2 years ago

How odd! I feel silly now, as both of them are going slowly and nothing is fast 😅

image

At least we're both seeing the same thing now. How very odd that it was faster for a while with nothing changing besides the version.

I've since tried completely resetting docker to factory defaults via the troubleshooting tools, but that's done nothing to help.

mattwelke commented 2 years ago

I've done about 10 reformats and reinstallations of my OS across 3 or 4 devices by now (not to try to fix this issue in particular, I just tend to reformat a lot, swapping PC parts, etc). Always the same problem. Definitely a problem even when default settings are used.

withinboredom commented 2 years ago

After finally being annoyed for a few months at waiting about 10 s, I found this issue. I've tried a few of the solutions here, including disabling IPv6 (IPv6 still isn't available in my area!), to no avail. However, I have quite a few adapters from working with Windows VMs, so I didn't try disabling IPv6 everywhere...

image

I have no idea what a solution looks like, but the 10s wait to build a container that usually takes <1s is driving me bananas...

mattwelke commented 2 years ago

Agreed. I have no problem with some CI/CD process that normally takes 5 minutes taking 5 minutes and 10 seconds instead. What drives me crazy is my short loop on my workstation, where I want to use Docker and rebuild an image very frequently. Or start up a docker-compose stack frequently.

withinboredom commented 2 years ago

@stephen-turner it would be great if someone could look into this further. I imagine most people aren't annoyed by it (i.e., their images take a while anyway), so they won't find themselves on this issue or even notice it. It's when you're used to an image building in less than a second, and it now takes at least 10, that you notice. Even then, it's barely enough to annoy you unless you're specifically working on the Dockerfile (like I was when I decided to try and google the issue).

nl-juntos-timon commented 2 years ago

A work-around: docker pull <image> before running the docker build. All subsequent builds will be fast.

I think there is something wrong with the caching...

mattwelke commented 2 years ago

A work-around: docker pull <image> before running the docker build. All subsequent builds will be fast.

I think there is something wrong with the caching...

This doesn't work. Once it's been a while since you pulled the image, it'll be slow again. And that initial "docker pull" that I describe here to speed up the "docker build" will itself be slow, defeating the purpose.

nl-juntos-timon commented 2 years ago

This doesn't work. Once it's been a while since you pulled the image, it'll be slow again. And that initial "docker pull" that I describe here to speed up the "docker build" will itself be slow, defeating the purpose.

I do many docker builds a day. The one manual pull from time to time, saves me a lot of time.

Of course it is still a workaround, not a real solution.

mattwelke commented 2 years ago

Sounds like we're experiencing the bug differently then. For me, after 10 seconds of not pulling images, my next image pull of any image will have that 10-17 second delay. This is enough to make the workaround not work for me, because I would need to do a manual slow pull before every pull that's part of my dev loop.

withinboredom commented 2 years ago

Taking a look at a tcpdump in WSL indicates that it doesn't seem to be doing anything on the network. At least from a WSL client:

sudo tcpdump -B 999999 &
(setsid docker build .) < /dev/null |& cat # to emulate no tty output
withinboredom commented 2 years ago

Ok, I'm able to see that it takes ~5 s for the client to send an ACK packet back after receiving some data, and then another ~5 s to do something after a final ACK from the server. I'm going to try to MITM the encryption to see what these packets are.
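For anyone digging through a capture, here's a small sketch of the kind of gap analysis involved (the find_stalls helper is hypothetical; timestamps are epoch seconds as produced by tcpdump -tt):

```python
def find_stalls(timestamps: list[float],
                threshold: float = 1.0) -> list[tuple[int, float]]:
    """Given packet timestamps in seconds, return (index, gap) pairs
    where the gap between consecutive packets exceeds `threshold`.
    Long gaps before client-sent packets point at the client stalling
    rather than the network."""
    stalls = []
    for i in range(1, len(timestamps)):
        gap = timestamps[i] - timestamps[i - 1]
        if gap > threshold:
            stalls.append((i, gap))
    return stalls

# Example: a trace where the client pauses ~5 s twice, as described above.
trace = [0.00, 0.02, 0.05, 5.10, 5.12, 10.15]
print(find_stalls(trace))
```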

withinboredom commented 2 years ago

FWIW, running a local registry mirror appears to reduce the number of times this happens...

Create a config.yml with the following contents:

version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
proxy:
  remoteurl: https://registry-1.docker.io

Then start it up

docker run -p 5000:5000 --restart always -d -v /path/to/config.yml:/etc/docker/registry/config.yml registry:2

In the settings, go to Docker Engine and update the json document there to point to the registry. Here's my contents for example:

{
  "registry-mirrors": [
    "http://localhost:5000"
  ],
  "insecure-registries": [
    "http://localhost:5000"
  ],
  "debug": false,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
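Before pointing the daemon at the mirror, it may help to confirm the registry actually answers. A minimal sketch (mirror_ok is a hypothetical helper; /v2/ is the standard Distribution registry API base path):

```python
import urllib.error
import urllib.request

def mirror_ok(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if a Distribution-style registry answers at
    `base_url`: the /v2/ base endpoint returns 200 when the registry
    is up and speaking the V2 API."""
    try:
        with urllib.request.urlopen(f"{base_url}/v2/", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Usage, matching the mirror started above:
#   print(mirror_ok("http://localhost:5000"))
```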
mattwelke commented 2 years ago

Workaround

I was due for another reformat, so this time I tried another workaround. Instead of using Docker Desktop, I installed Docker inside WSL 2 the way one normally would on Ubuntu (https://docs.docker.com/engine/install/ubuntu/). So far, none of the features I rely on for my day-to-day work (like being able to run a web server from within a container and view it in my browser on the Windows side) have broken, so I'm going to continue like this for a while. Coming from Ubuntu without WSL 2, this actually feels more natural to me anyway; Docker Desktop was extra complexity I had to learn in order to keep using Docker when I switched to Windows:

> which docker
/usr/bin/docker

Using kubectl

Docker Desktop provided other tools, like kubectl, but I just install those separately if I need them. For example, for kubectl, I can install it via APT or via the gcloud components manager. I used to do it via gcloud. Now, I'm just using APT.

~ > which kubectl
/usr/bin/kubectl

This has worked well for me so far. I'm able to pull images quickly even after waiting a while before the last pull:

> time docker pull mongo
Using default tag: latest
latest: Pulling from library/mongo
Digest: sha256:d78c7ace6822297a7e1c7076eb9a7560a81a6ef856ab8d9cde5d18438ca9e8bf
Status: Image is up to date for mongo:latest
docker.io/library/mongo:latest

real    0m0.426s
user    0m0.024s
sys     0m0.000s

Caveats

Automatic start at boot

Normally, I'd follow the post-installation setup steps (https://docs.docker.com/engine/install/linux-postinstall/) to add myself to the docker group (so I don't need sudo) and to enable the systemd services (so Docker starts automatically at boot). Adding my user to the group works fine. But even after running the commands to enable the services:

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

...and then restarting to test it out, it doesn't work:

> docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I have to manually run a command to start the service. And it isn't a systemctl command:

> sudo systemctl start docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down

It's a service command:

> sudo service docker start
 * Starting Docker: docker                                                                                       [ OK ]

Then, until I shut my computer down, I can use Docker:

> docker ps -a
CONTAINER ID   IMAGE                         COMMAND   CREATED      STATUS                  PORTS     NAMES
b81b92fedd7c   mwelke/geolite2-web-service   "/main"   4 days ago   Exited (2) 4 days ago             my-geo

This works well for me because my concern was the long pause before every single pull operation (including those done automatically by build operations), which interrupted my flow while I was trying to work. Having to run one command that always finishes instantly, at the start of my work day, works fine for me.

kind

Something I do only once in a while is test my app inside Kubernetes with a local Kubernetes cluster. On Ubuntu, I would use kind (https://github.com/kubernetes-sigs/kind) for this, but this doesn't seem to work with this approach:

~ > which kind
/home/matt/go/bin/kind
~ > kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.21.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
Details: ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1. The verbose kubeadm init log shows certificates, kubeconfig files, and static Pod manifests all being generated normally before the output cuts off.
218 manifests.go:109] [control-plane] adding volume "kubeconfig" for component "kube-scheduler" I0731 15:33:23.581101 218 manifests.go:126] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml" [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests" I0731 15:33:23.581449 218 local.go:74] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml" I0731 15:33:23.581466 218 waitcontrolplane.go:87] [wait-control-plane] Waiting for the API server to be healthy I0731 15:33:23.581836 218 loader.go:372] Config loaded from file: /etc/kubernetes/admin.conf [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s I0731 15:33:23.582796 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:24.083800 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:24.584140 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:25.083250 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:25.584236 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:26.084165 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:26.584240 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:27.084226 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:27.583347 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:28.083428 218 
round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:28.583363 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:29.083344 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:29.583245 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:30.084258 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:30.584265 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:31.084206 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:31.584309 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:32.083229 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:32.583275 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:33.084290 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:33.584316 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:34.084278 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:34.584207 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:35.084226 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:35.584173 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:36.084290 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:36.584257 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:37.084364 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:37.583333 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:38.083290 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:38.583295 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:39.083195 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:39.584191 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:40.084198 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:40.584188 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:41.084249 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:41.584189 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:42.084212 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:42.583284 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:43.084305 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:43.583446 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:44.083395 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:44.583241 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:45.084252 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:45.584271 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:46.084325 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:46.583374 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:47.083512 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:47.583579 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:48.083566 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:48.583523 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:49.083683 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:49.583583 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:50.083622 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:50.583685 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:51.083659 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:51.583784 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:52.083808 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:52.584138 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:53.084177 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:53.584177 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:54.084160 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:54.584160 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:55.084276 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:55.584243 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:56.084177 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:56.584179 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:57.084161 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:57.584253 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:58.084210 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:58.584210 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:59.084207 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:33:59.584177 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:00.084173 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:00.584135 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:01.084128 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:01.584108 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:02.084123 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:02.584200 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:03.084273 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds [kubelet-check] Initial timeout of 40s passed. I0731 15:34:03.583128 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 
I0731 15:34:04.084166 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:04.584114 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:05.084077 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:05.584051 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:06.084029 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:06.583959 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:07.083936 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:07.584005 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:08.084135 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:08.583967 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 
I0731 15:34:09.084059 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:09.584028 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:10.083995 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:10.583992 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:11.084153 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:11.584237 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:12.084218 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:12.583347 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:13.083461 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:13.583320 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:14.083312 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:14.583248 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:15.084333 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:15.583318 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:16.083220 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:16.584245 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:17.084290 218 
round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:17.583374 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:18.083455 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:18.583370 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. I0731 15:34:19.083507 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:19.583466 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:20.083539 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:20.583520 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:21.083431 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:21.583457 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:22.083349 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:22.583324 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:23.083220 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:23.584138 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:24.084041 218 
round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:24.584009 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:25.084178 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:25.584149 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:26.084140 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:26.584212 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:27.084264 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:27.583322 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:28.083263 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:28.584232 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:29.084170 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:29.584231 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:30.084277 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:30.584313 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:31.084324 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:31.583360 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:32.083250 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:32.583305 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:33.083255 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:33.584291 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:34.084324 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:34.583258 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:35.084359 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:35.583409 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:36.083370 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:36.583314 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:37.083167 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:37.584239 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:38.084368 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:38.583387 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds [kubelet-check] It seems like the kubelet isn't running or healthy. [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused. 
I0731 15:34:39.083681 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:39.583688 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:40.083555 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:40.583559 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:41.083378 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:41.583272 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:42.083300 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:42.583361 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:43.083372 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:43.583392 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:44.083457 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:44.583360 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:45.083267 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:45.583267 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:46.084234 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:46.584324 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:47.083214 218 
round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:47.583258 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:48.084353 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:48.584288 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:49.084610 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:49.583597 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:50.083554 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:50.583534 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:51.083673 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:51.583761 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:52.083792 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:52.583864 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:53.083902 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:53.584095 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:54.084144 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:54.584154 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:55.084221 218 round_trippers.go:454] GET 
https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:55.584176 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:56.084293 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:56.584448 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:57.083384 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:57.583420 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:58.083362 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:58.583251 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:59.083310 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:34:59.583255 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:00.084286 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:00.584282 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:01.083329 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:01.583307 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:02.083223 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:02.583399 218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s in 0 milliseconds I0731 15:35:03.083397 218 round_trippers.go:454] GET 
I0731 15:35:03.583485     218 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
[... the same healthz poll repeated every 500 ms through 15:35:18.583539, always "in 0 milliseconds" ...]
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:114
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:225
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1371

This isn't a deal-breaker for me right now. It's rare that I need to test something in Kubernetes; usually I just test it in isolation in a Docker container and then deploy it, assuming it'll interact with the other containers properly. Also, I can still use kind in automated tests in my CI/CD (I've tested it with GitHub Actions and it worked fine - https://github.com/mattwelke/kind-and-ow-on-github-actions-test/blob/main/.github/workflows/koga.yaml), so I can just use kind in my outer loop instead of my inner loop.

withinboredom commented 2 years ago

That sounds really interesting @mattwelke. Can you invoke Docker from the Windows side? I'd switch to this approach in a heartbeat if it let IntelliJ and friends invoke Docker containers without issue. Although I suppose I could install an X server and run IntelliJ from Ubuntu if all else fails. The 10-12 second delay is really grating on me now.

jakkaj commented 2 years ago

I'm also seeing this issue lately: building my devcontainer in VS Code on WSL 2 takes a long time to get started. Mine takes over 70 seconds to start.

mattwelke commented 2 years ago

@withinboredom No, in this case I'm not using Docker on the Windows side at all. It's just a program installed in Ubuntu in WSL 2. My use case for WSL 2 is to have a self-contained Linux development environment where I use Windows as a GUI. It's nice because Ubuntu as a GUI ended up giving me problems (I have a funky two-monitor setup with atypical resolutions), and with Windows running, I can multitask and play games while coding, stuff like that. :P

chris-leach commented 2 years ago

I am also seeing this issue: a docker build of mine was repeatedly hanging on `load metadata for docker.io...` for 60+ seconds each time, even when all other build steps were cached.

Windows 10 19041.1110
WSL 2
Docker Desktop 3.5.2

`DOCKER_BUILDKIT` is not set in the shell from which I call docker, but the output of builds looks similar to what I would expect from enabling BuildKit in earlier versions (did it get enabled by default at some point?).
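(Editor's note: one way to check this locally is to force the builder explicitly with the documented `DOCKER_BUILDKIT` variable and compare the output; a sketch, run from any directory containing a Dockerfile.)

```shell
# Classic builder: plain "Step 1/N" output, no colors.
DOCKER_BUILDKIT=0 docker build .

# BuildKit: colored output with "[internal] load metadata for ..." steps.
DOCKER_BUILDKIT=1 docker build .

# If the unset-variable output matches DOCKER_BUILDKIT=1,
# BuildKit has become the default on your install.
```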

The docker pull workaround mentioned above works for me: the pull also hangs, but once it completes, subsequent builds do not.
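(Editor's note: concretely, the workaround looks like this, using the base image from the Dockerfile at the top of the issue; the `myapp` tag is a placeholder.)

```shell
# Pre-pull the base image once; this pull may itself hang on "load metadata",
# but only once.
docker pull python:3.9-buster

# Later builds resolve the base image from the local cache and start
# immediately instead of stalling on the metadata step.
docker build -t myapp .
```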

mattwelke commented 2 years ago

> DOCKER_BUILDKIT is not set in the shell from which I call docker, but the output of builds looks similar to what I would expect from enabling buildkit in earlier versions (did it get enabled by default at some point?).

I experienced this too. At the same time the bug appeared for me (in January 2021), I noticed the build output looked very different from normal. It had color and described each step of the build in more detail, including live-updating timers showing, in tenths of a second, how long each step took. My screenshots at the top of this issue from January show what the docker build output started to look like for me.

withinboredom commented 2 years ago

I'm also seeing random libcurl requests take ~10s from inside containers (causing timeouts).

image

The next request then works fine. This only happens in Docker for Windows; I don't see any delays with dig or other tools.
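(Editor's note: to separate name-resolution time from transfer time, curl's `--write-out` timing variables can help. A sketch; the `curlimages/curl` image and the URL are only examples.)

```shell
# If time_namelookup accounts for nearly all of time_total, the delay is in
# DNS resolution (e.g. Docker Desktop's DNS proxy), not the remote server.
docker run --rm curlimages/curl \
  -s -o /dev/null \
  -w 'dns: %{time_namelookup}s  total: %{time_total}s\n' \
  https://www.example.com/
```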

withinboredom commented 2 years ago

Finally, found a solution...

I added this to my Docker engine config (daemon.json), which fixed my curl DNS woes:

{
  "dns": [ "8.8.8.8" ]
}

But it seems that whatever settings BuildKit uses, it doesn't respect the engine's DNS configuration.

However, we can use buildx to remove the delay I'm seeing. Type these commands and try running some builds:

docker buildx install                  # alias `docker build` to buildx
INSTANCE=$(docker buildx create)       # create a fresh builder instance
docker buildx use "$INSTANCE"
docker build (--load|--push) [options] <path>

In my particular case, it turns out that whatever Docker for Windows uses as a DNS proxy sometimes gets hung up for quite a while.