containerd / containerd

An open and reliable container runtime
https://containerd.io
Apache License 2.0

'failed to reserve container name' #4604

Closed sadortun closed 2 years ago

sadortun commented 4 years ago

Description

Hi!

We are running containerd on GKE with pretty much all defaults: a dozen nodes and a few hundred pods, with plenty of free memory and disk.

Over the last week or so, many pods have started failing with a failed to reserve container name error. I do not recall any specific changes to the cluster or to the containers themselves.

Any help will be greatly appreciated!

Steps to reproduce the issue: I have no clue how to specifically reproduce this issue.

The cluster has nothing special and the deployment is straightforward. The only thing that could be relevant is that our images are quite large, around 3 GB.

There are a few more details here: https://serverfault.com/questions/1036683/gke-context-deadline-exceeded-createcontainererror-and-failed-to-reserve-contai

Describe the results you received:

2020-10-07T08:01:45Z Successfully assigned default/apps-abcd-6b6cb5876b-nn9md to gke-bap-mtl-1-preemptible-e2-s4-e6a8ddb4-ng3v I 
2020-10-07T08:01:50Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:16:45Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:18:45Z Error: context deadline exceeded W 
2020-10-07T08:18:45Z Container image "redis:4.0-alpine" already present on machine I 
2020-10-07T08:18:53Z Created container redis I 
2020-10-07T08:18:53Z Started container redis I 
2020-10-07T08:18:53Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:02Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:02Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:03Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:20Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:20Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:21Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:34Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:34Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:35Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:44Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:19:44Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:19:54Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:08Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:08Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:20:18Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:30Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:20:30Z Error: failed to reserve container name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0": name "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0" is reserved for "8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f" W 
2020-10-07T08:21:19Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:26:35Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:31:36Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:36:26Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
2020-10-07T08:41:18Z Pulling image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" I 
I 2020-10-07T08:46:41Z Successfully pulled image "gcr.io/my/appImage:223c133ff631c41e1bc21a8b7d7554036da4fb4e" 

Describe the results you expected: Live a happy life, error free :)

Output of containerd --version:

containerd github.com/containerd/containerd 1.3.2 ff48f57fc83a8c44cf4ad5d672424a98ba37ded6

Any other relevant information:

windniw commented 3 years ago

It looks like there is already a container named web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0 with ID 8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f in containerd. While kubelet wants to create a new one with the same name, the CRI plugin fails on it.

Could you show the output of docker ps -a or ctr c list?

pfuhrmann commented 3 years ago

Did you manage to resolve this issue @sadortun? We are experiencing the same. Also on GKE with containerd runtime.

  Normal   Scheduled  2m21s               default-scheduler  Successfully assigned ***-77c979f8bf-px4v9 to gke-***-389a7e33-t1hl
  Warning  Failed     20s                 kubelet            Error: context deadline exceeded
  Normal   Pulled     7s (x3 over 2m20s)  kubelet            Container image "***" already present on machine
  Warning  Failed     7s (x2 over 19s)    kubelet            Error: failed to reserve container name ***-77c979f8bf-px4v9_***": name "***-77c979f8bf-px4v9_***" is reserved for "818fcfef09165d91ac8c86ed88714bb159a8358c3eca473ec07611a51d72b140"

We are deploying the same image to multiple deployments (30-40 pods) at the same time. We had no such issues with the docker runtime.

Eventually, kubelet resolves this without manual intervention; however, it significantly slows down the deployment of new images during a release (an extra 2-3 minutes to resolve name conflicts).

sadortun commented 3 years ago

Hi @pfuhrmann

We investigated this quite deeply with the GKE dev team and were not able to reproduce it.

That said, we are pretty convinced the issue comes from one of the two following causes:

  1. Disk IO is too high and containerd times out because of that.
  2. Starting 10-20+ pods at the same time on a single node causes a memory usage spike, and at some point the containerd process gets killed.

Unfortunately, after a month of back and forth with GKE devs, we were not able to find a solution.

The good news is that, for us, refactoring our application to reduce the number of simultaneously starting pods from about 20 down to 5 did the trick. Since then, we have had no issues.

You might also want to increase the node boot disk size; it seems to help too.

kmarji commented 3 years ago

Any update on this? Did anybody manage to solve it? We are facing the same issue.

chrisroat commented 3 years ago

We are also seeing the same issue, GKE with containerd. It does seem to be correlated with starting many pods at once.

Switching from cos_containerd back to cos (docker based) seems to have resolved the situation, at least in the short term.

kmarji commented 3 years ago

We are also seeing the same issue, GKE with containerd. It does seem to be correlated with starting many pods at once.

Switching from cos_containerd back to cos (docker based) seems to have resolved the situation, at least in the short term.

Same for us: once we switched back to cos with docker, everything worked.

sadortun commented 3 years ago

Same for us: once we switched back to cos with docker, everything worked.

In the end we still had occasional issues, and we also had to switch back to cos.

mikebrow commented 3 years ago

jotting down some notes here, apologies if it's lengthy:

Let me try to explain/figure out the reason you got "failed to reserve container name" ..

Kubelet tried to create a container that it had already asked containerd to create at least once. When containerd tried the first time, it received a field in the container-create metadata named attempt, which held the default value 0. Containerd then reserved the unique name for attempt 0 that you see in your log (note the _0 at the end of the name): "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0". Then something caused a context timeout between kubelet and containerd. The kubelet context timeout is configurable ("--runtime-request-timeout duration Default: 2m0s"), and a 2-minute timeout can happen for any number of reasons: an unusually long garbage collection, a file system hiccup, locked files, deadlocks while waiting, some very expensive init operation in one of your other containers on the node.. who knows? That's why we have/need recovery procedures.

What should have happened is that kubelet increments the attempt number (or at least that's how I see it from this side, the containerd side, of the CRI API). But kubelet did not increment the attempt number, and furthermore containerd was still trying to create the container from the first request. The create on the containerd side may even have finished by this point; it is possible the timeout only happened on the kubelet side and containerd continued finishing the create, possibly even attempting to return the success result. If containerd had actually failed, it would have deleted the reservation for that container name: immediately after reserving the name, containerd defers its removal on any error in the create. https://github.com/containerd/containerd/blob/master/pkg/cri/server/container_create.go#L65-L84
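
To make that concrete, here is a minimal sketch of the reserve-then-defer-release pattern described above. The names and signatures are illustrative only, not containerd's actual API; the point is that the reservation is released when a create fails, while a create that never returns keeps holding the name.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// registrar maps a reserved container name to the container ID holding it.
type registrar struct {
	mu    sync.Mutex
	names map[string]string
}

func (r *registrar) reserve(name, id string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if owner, ok := r.names[name]; ok {
		return fmt.Errorf("name %q is reserved for %q", name, owner)
	}
	r.names[name] = id
	return nil
}

func (r *registrar) release(name string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.names, name)
}

// createContainer builds the unique name from the container metadata and the
// attempt number (the trailing "_0" in the logs), reserves it, and defers
// releasing it on error only. A create that never returns keeps the name.
func createContainer(r *registrar, meta string, attempt int, id string, doCreate func() error) (retErr error) {
	name := fmt.Sprintf("%s_%d", meta, attempt)
	if err := r.reserve(name, id); err != nil {
		return fmt.Errorf("failed to reserve container name %q: %w", name, err)
	}
	defer func() {
		if retErr != nil {
			r.release(name)
		}
	}()
	return doCreate()
}

func main() {
	r := &registrar{names: map[string]string{}}
	// First attempt fails cleanly, so the name is released and can be reused.
	_ = createContainer(r, "web_apps-abcd_default_uid", 0, "id-1", func() error { return errors.New("boom") })
	// A retry with the same attempt number now succeeds.
	fmt.Println(createContainer(r, "web_apps-abcd_default_uid", 0, "id-2", func() error { return nil })) // <nil>
}
```

Running this toy, the retry with the same attempt number succeeds only because the first attempt failed and released the name; if the first create had simply hung, the retry would hit the "is reserved for" error instead.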

So ok.. skimming over the kubelet code.. I believe this is the code that decides what attempt number we are on? https://github.com/kubernetes/kubernetes/blame/master/pkg/kubelet/kuberuntime/kuberuntime_container.go#L173-L292

In my skim.. I think I see a window where kubelet will try attempt 0 a second time after the first create attempt fails with a context timeout. But I may be reading the code wrong? @dims @feiskyer @Random-Liu

CyberHippo commented 3 years ago

Bumped into this issue as well. Switching back to cos with docker.

jsoref commented 3 years ago

Fwiw, we're hitting this this week.

k8s 1.20.8-gke.900; containerd://1.4.3

> Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.8-gke.900", GitCommit:"28ab8501be88ea42e897ca8514d7cd0b436253d9", GitTreeState:"clean", BuildDate:"2021-06-30T09:23:36Z", GoVersion:"go1.15.13b5", Compiler:"gc", Platform:"linux/amd64"}

```
kubectl get nodes -o json | jq '.items[].status.nodeInfo.containerRuntimeVersion' | uniq
"containerd://1.4.3"
```

In my case, the pod is owned by a (batch/v1)job, and the job by a (batch/v1beta1)cronjob.

The reserved for item only appears in the error; nothing else seems to know about it.

Using Google Cloud Logging, I can search:

```
"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" OR "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32"
```

with a search range of 2021-08-22 01:58:00.000 AM EDT .. 2021-08-22 02:03:00.000 AM EDT.

This is the _first_ hit:

> ⚠️ Warning 2021-08-22 02:02:44.000 EDT
> backup-test-db-1629612000-cz8ks "Error: failed to reserve container name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0": name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" is reserved for "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32""

And this is the _second_ hit:

> 🌟 Default 2021-08-22 02:02:45.217 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:44.792364 1653 remote_runtime.go:227] CreateContainer in sandbox "c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0": name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" is reserved for "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32"

There are additional hits, but they aren't exciting.

For reference, this search (with the same time params) yields nothing:

```
("backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" OR "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32") -"Attempt:0" "Attempt"
```

This search

```
("backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" OR "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32") "Attempt:0"
```

yields two entries:

```
Default 2021-08-22 02:02:45.219 EDT gke-default-cluster-default-pool-c90133be-6xkd time="2021-08-22T06:02:44.792102443Z" level=error msg="CreateContainer within sandbox \"c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128\" for &ContainerMetadata{Name:backup-db,Attempt:0,} failed" error="failed to reserve container name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\": name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\" is reserved for \"188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32\""
Default 2021-08-22 02:02:56.899 EDT gke-default-cluster-default-pool-c90133be-6xkd time="2021-08-22T06:02:56.899853062Z" level=error msg="CreateContainer within sandbox \"c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128\" for &ContainerMetadata{Name:backup-db,Attempt:0,} failed" error="failed to reserve container name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\": name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\" is reserved for \"188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32\""
```

(There are additional hits if I extend the time window forward, but as they appear to be identical other than the timestamp, I don't see any value in repeating them.)
Relevant log events

The best query I've found is:

```
resource.labels.cluster_name="default-cluster" "backup-test-db-1629612000-cz8ks_test"
```

(The former is to limit which part of GCloud to search, and the latter is the search.)

```
Default 2021-08-22 02:00:03.259 EDT gke-default-cluster-default-pool-c90133be-6xkd I0822 06:00:03.259074 1653 kubelet.go:1916] SyncLoop (ADD, "api"): "backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)"
Default 2021-08-22 02:00:03.569 EDT gke-default-cluster-default-pool-c90133be-6xkd I0822 06:00:03.569830 1653 kuberuntime_manager.go:445] No sandbox for pod "backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)" can be found. Need to start a new one
Default 2021-08-22 02:00:43.213 EDT gke-default-cluster-default-pool-c90133be-6xkd I0822 06:00:43.213133 1653 kubelet.go:1954] SyncLoop (PLEG): "backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)", event: &pleg.PodLifecycleEvent{ID:"efe343a0-5641-427c-8a65-1b7dc939432d", Type:"ContainerStarted", Data:"c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128"}
2021-08-22 02:02:45.217 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:44.792364 1653 remote_runtime.go:227] CreateContainer in sandbox "c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0": name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" is reserved for "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32"
Default 2021-08-22 02:02:45.217 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:42.840645 1653 pod_workers.go:191] Error syncing pod efe343a0-5641-427c-8a65-1b7dc939432d ("backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)"), skipping: failed to "StartContainer" for "backup-db" with CreateContainerError: "context deadline exceeded"
Default 2021-08-22 02:02:45.217 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:44.792589 1653 pod_workers.go:191] Error syncing pod efe343a0-5641-427c-8a65-1b7dc939432d ("backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)"), skipping: failed to "StartContainer" for "backup-db" with CreateContainerError: "failed to reserve container name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\": name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\" is reserved for \"188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32\""
Default 2021-08-22 02:02:56.900 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:56.900057 1653 remote_runtime.go:227] CreateContainer in sandbox "c9f8cf0e4fc280b632bf8f4365dccf34f213c5aa4636a4424aab68940d579128" from runtime service failed: rpc error: code = Unknown desc = failed to reserve container name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0": name "backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0" is reserved for "188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32"
Default 2021-08-22 02:02:56.900 EDT gke-default-cluster-default-pool-c90133be-6xkd E0822 06:02:56.900309 1653 pod_workers.go:191] Error syncing pod efe343a0-5641-427c-8a65-1b7dc939432d ("backup-test-db-1629612000-cz8ks_test(efe343a0-5641-427c-8a65-1b7dc939432d)"), skipping: failed to "StartContainer" for "backup-db" with CreateContainerError: "failed to reserve container name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\": name \"backup-db_backup-test-db-1629612000-cz8ks_test_efe343a0-5641-427c-8a65-1b7dc939432d_0\" is reserved for \"188e7647efe4e1243a4fb3529c69f95c83e3876d4989ba94a409c652f99a8f32\""
```
matti commented 3 years ago

same, switching back to docker

bduclaux commented 3 years ago

Same issue here, with GKE version 1.20.9-gke.701.

mikebrow commented 3 years ago

@dims @feiskyer @Random-Liu @ehashman

FYI.. kubelet is trying to create the same container twice with the same start count "0", and it also does not respond to the "failed to reserve container name" error message. Thoughts?

donnyv12 commented 3 years ago

Having this issue as well with GKE. Appreciate the detailed research folks have posted here.

ehashman commented 3 years ago

@dims @feiskyer @Random-Liu @ehashman

FYI.. kubelet is trying to create the same container twice with the same start count "0", and it also does not respond to the "failed to reserve container name" error message. Thoughts?

Is there a reproducer? Can you please file an issue against k/k?

bduclaux commented 3 years ago

I confirm that moving back to docker solves the problem:

gcloud container clusters upgrade mycluster --image-type cos --node-pool mynodepool --zone myzone

matti commented 3 years ago

This also happens with UBUNTU_CONTAINERD, not just COS_CONTAINERD

qiutongs commented 3 years ago

@mikebrow I previously investigated a reported issue in k/k: https://github.com/kubernetes/kubernetes/issues/94085

My summary is that kubelet has correct logic for incrementing the restart number, which is set to "current_restart + 1"; see this kubelet code. The possible outcomes are listed below, followed by a sketch of that logic.

  1. If the CreateContainer request eventually succeeds on the containerd side, kubelet will see it and increment the restart count on the next iteration of SyncPod. The pod will eventually become ready.
  2. If the CreateContainer request eventually fails on the containerd side, containerd should release the name. On the next iteration of kubelet's SyncPod, it shouldn't see the "failed to reserve container name" error.
  3. If the CreateContainer request is stuck on the containerd side, the name is never released, and kubelet will keep seeing the "failed to reserve container name" error.
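
To illustrate case 3, here is a simplified sketch of how the next attempt number is derived from the containers kubelet can see. This is my reading of the logic with illustrative types, not kubelet's actual code: if the first CreateContainer is still in flight and no container is visible yet, the computed attempt stays 0, so the retry reuses the exact same name.

```go
package main

import "fmt"

// containerStatus is a stand-in for the runtime status kubelet inspects.
type containerStatus struct {
	name         string
	restartCount int
}

// nextAttempt returns restartCount+1 of the latest visible container with this
// name, or 0 if no such container is visible yet.
func nextAttempt(name string, visible []containerStatus) int {
	attempt := 0
	for _, c := range visible {
		if c.name == name && c.restartCount+1 > attempt {
			attempt = c.restartCount + 1
		}
	}
	return attempt
}

func main() {
	// Nothing visible yet because the first create is stuck inside the
	// runtime: the retry computes attempt 0 again, hence the same reserved name.
	fmt.Println(nextAttempt("backup-db", nil)) // 0

	// Once the first create finally materializes, the next attempt becomes 1
	// and the name conflict disappears.
	fmt.Println(nextAttempt("backup-db", []containerStatus{{name: "backup-db", restartCount: 0}})) // 1
}
```
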
dungdm93 commented 3 years ago

Hello @qiutongs, my k8s cluster runs Spark jobs, and recently we have been getting a lot of CreateContainerError. As in your report, I found that on the first request containerd creates the container but it gets stuck in the Created state. I also checked the logs of both containerd and kubelet but did not find any more details. Do you have any guide, instructions, or recommendations to help me dig deeper? Thanks.

navnitDevOps commented 2 years ago

Does anyone have any other solution to this issue except moving back to cos with docker? Eventually we will have to upgrade our GKE cluster, and we might face this issue again in the future.

bduclaux commented 2 years ago

Unfortunately, the only working solution is to move back to cos with docker. Amazing to see that this critical bug has been open for more than a year, and still no fix.

sadortun commented 2 years ago

@navnitDevOps

does anyone have any other solution

Have a look at how many containers are starting at the same time, and how much memory each of them takes at startup. Make sure you have plenty left.

In our case, even though we were short on RAM, having more free RAM seemed to help, but we still had to revert to COS.

eduardoaw commented 2 years ago

Same problem here

containerd github.com/containerd/containerd 1.4.6 d71fcd7d8303cbf684402823e425e9dd2e99285d

Amazon EKS 1.21

elucidsoft commented 2 years ago

Just had this happen to me on GKE.

euqen commented 2 years ago

Happened to me too on GKE

meln5674 commented 2 years ago

Ran into this on KinD (kindest/node:v1.21, single node) when disk IO was higher than expected during tests, which I suspect was caused or exacerbated by creating too many pods at once. Creating fewer pods at once still didn't work at first, but restarting containerd and kubelet (in that order) caused those few pods to come up as expected. I was then able to slowly scale all of the test pods back up to their expected replica counts without a problem. My guess is that once this error occurs, kubelet and containerd are "stuck", but restarting them appears to un-stick them. No idea if this has any applicability to an actual production environment.

matti commented 2 years ago

I can only reproduce this issue (and then always) when scaling multiple heavy pods, heavy both in terms of image size and the processes launched.

zonnie commented 2 years ago

Just happened on GKE... not the first time. I'm running k8s 1.21, so I can't even go back to cos. I also have no clear way of "getting out" of this, so it can wreak havoc in production. Any ideas?

sachinar commented 2 years ago

We got the same issue.

Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m54s               default-scheduler  Successfully assigned search/document-ingestion-6b7587d4b6-jn5t7 to gke-search-s-search-s-d5f7ec3d-9n9v
  Warning  Failed     107s                kubelet            Error: context deadline exceeded
  Normal   Pulled     7s (x9 over 3m47s)  kubelet            Container image "docker.io/istio/proxy_init:1.1.17" already present on machine
  Warning  Failed     7s (x8 over 106s)   kubelet            Error: failed to reserve container name "istio-init_document-ingestion-6b7587d4b6-jn5t7_search_7e2bda9a-ba49-48b2-8795-195f32542fc0_0": name "istio-init_document-ingestion-6b7587d4b6-jn5t7_search_7e2bda9a-ba49-48b2-8795-195f32542fc0_0" is reserved for "2623803ef8fab2dcdd68141abf47e89827c921d4d1451002724b2c00ec7eb32f"
qiutongs commented 2 years ago

Here is my current analysis. I will keep updating this comment.

Summary

"failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer request reserving the same container name (like below). T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side) T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry) T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error T4: 1st CreateContainer request is still in-flight…

It simply indicates that the CreateContainer request is slower than the configurable --runtime-request-timeout (default 2 minutes).
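
The timeline can be simulated with a toy example. Durations are scaled down (milliseconds standing in for minutes), and the "runtime" below is an illustrative stand-in, not kubelet or containerd code; it never releases the name because the simulated create neither fails nor finishes before the retry arrives.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type runtimeStub struct {
	mu       sync.Mutex
	reserved map[string]string // container name -> ID holding the reservation
}

// CreateContainer reserves the name, then simulates a slow create (e.g. stuck
// on throttled disk). This toy keeps the reservation; the real code releases
// it only when the create fails.
func (r *runtimeStub) CreateContainer(name, id string, createTime time.Duration) error {
	r.mu.Lock()
	if owner, ok := r.reserved[name]; ok {
		r.mu.Unlock()
		return fmt.Errorf("failed to reserve container name %q: name %q is reserved for %q", name, name, owner)
	}
	r.reserved[name] = id
	r.mu.Unlock()

	time.Sleep(createTime) // the slow part; still "in flight" from the caller's view
	return nil
}

func main() {
	rt := &runtimeStub{reserved: map[string]string{}}
	name := "web_mypod_default_uid_0" // attempt 0 is reused on retry

	// T1: first request; the "kubelet" gives up after its runtime-request-timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	done := make(chan error, 1)
	go func() { done <- rt.CreateContainer(name, "id-1", 300*time.Millisecond) }()
	select {
	case err := <-done:
		fmt.Println("T1 finished:", err)
	case <-ctx.Done():
		fmt.Println("T1 timed out on the client side:", ctx.Err()) // context deadline exceeded
	}

	// T2/T3: the retry reuses the same name while the first create is still running.
	fmt.Println("T2 retry:", rt.CreateContainer(name, "id-2", 0))

	// T4: the first create is still in flight; eventually it finishes and keeps the name.
	fmt.Println("T4 first request:", <-done)
}
```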

Based on my observation and investigation so far, I found the following facts.

  1. All the issues originate from slow disk operations (e.g. disk throttling on GKE)
  2. The container and pod will be created successfully, given sufficient time. (Note: this assumes restartPolicy:Always or restartPolicy:OnFailure in the PodSpec. Yes, restartPolicy affects the behavior of container creation.)

Mitigation

  1. If pods have failed, consider using restartPolicy:Always or restartPolicy:OnFailure in the PodSpec
  2. Increase the boot disk IOPS (e.g. upgrade disk type or increase disk size)
  3. Switch back to docker

Theory 1

Expected symptom: some pods are up and generating heavy IO, but others are not.

Docker has a similar mechanism of "reserving the container name" to prevent conflicts. However, dockershim handles it in a different way from the containerd CRI implementation.

err = docker.CreateContainer
if (err == container name conflict) {
    removeErr = docker.RemoveContainer
    if (removeErr == nil) {
        return err
    } else {
        if (removeErr == "Container not found") {
            randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
            return docker.CreateContainer
        } else {
            return err
        }
    } 
}

https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284

In my experiment, it keeps hitting the "randomize the container name" case. This indicates that every kubelet retry will try a new container name in dockershim. Containerd, however, sticks to a single container name, so all subsequent retries are doomed to fail while the initial request is in flight.

In conclusion, dockershim retries more aggressively. Therefore, docker has a higher chance of creating the container successfully, and much faster, than containerd.

Theory 2

Expected symptom: none of the pods are up.

Containerd has weaker control over image pulls than docker. For example, it may pull too many images in parallel, which generates more disk IO.

(No code reference found yet.)

Reproducing the Problem

Unfortunately, I haven't found a way to reproduce that docker is consistently superior to containerd.

Experiment for Theory 1

Setup:

Execution:

Expected Result:

Actual Result:

Experiment for Theory 2

Setup:

Execution:

Expected Result

Actual Result

Need Help from Community

  1. Find a way to reproduce that docker is superior to containerd in the same environment
  2. Answer the following questionnaire.

Misc

danielfoehrKn commented 2 years ago

We also just hit this across many clusters with an I/O overload on the root disk (symptoms: high I/O wait times and little I/O idle time). It was only a secondary issue resulting from an initially timed-out request to the containerd CRI shim caused by I/O slowness. However, this bug is a problem because it delays the time until the workload eventually starts: initially timed-out requests continue to fail with failed to reserve container name even though I/O performance has improved in the meantime.

We had to increase the IOPS of the root disks to prevent the DeadlineExceeded errors during pod creation (visible in the kubelet logs) that lead to the bug described in this issue.

dungdm93 commented 2 years ago

@danielfoehrKn If the root cause is slow disk, I wonder why docker does not suffer from the same issue.

danielfoehrKn commented 2 years ago

If the root cause is slow disk, I wonder why docker does not suffer from the same issue.

At least for us, the disk was most likely the reason why the initial request to the CRI timed out. After it timed out, the pod could not be created because each subsequent API call by the kubelet ran into the failed to reserve container name problem.

We have not tested it on docker (we cannot switch either). However, if dockerd/dockershim does not have the same problem of holding on to some stale "state", then subsequent kubelet requests to create the PodSandbox could work, given that the reason for the initial DeadlineExceeded error from the container runtime (such as disk I/O overload) is no longer present. This can happen when many I/O-heavy pods are scheduled to the node at the same time, all pulling images, doing initialization work, and so on.

mikebrow commented 2 years ago

@danielfoehrKn If the root cause is slow disk, I wonder why docker does not suffer from the same issue.

Per CRI, containerd receives and processes requests in parallel. Kubelet has a couple of bugs/issues here: it makes too many parallel requests even under disk pressure, and when a timeout occurs on its side it fails to recognize that the problem was of its own creation. It presumes that asking the CRI to do the same thing again will make it work this time, even though the CRI is still doing what was previously requested, and so containerd reports the error. The dockershim path, from kubelet down through the docker API and then through containerd (versus directly to containerd), has more serialization and different code in the now-deleted dockershim (even though containerd was still in the path), thus producing different behavior. Sometimes more serialization is faster, such as when there is so much resource pressure that resources thrash, for example memory swapping to disk while snapshots are being created, new containers are loading up, and memory used for older requests is being garbage collected, etc.

Kubelet should be modified to recognize the timeout situation and avoid subsequent duplicate requests. Alternatively, we could modify the CRI API to serialize service requests (I do not recommend we go this way), or change the API from parallel requests with client-side timeouts to subscription requests with a first ack response and subsequent status-change event responses. We could also change the CRI API to request (by policy) serialization, through queuing, of requests when under pressure, or serialization by "failing" requests that arrive while parallel requests are already being processed under pressure (also not recommended). If we want to "manage" requests at the CRI level we can do so, but we're going to want to talk about node management policies.
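
For illustration only, here is one possible shape of the "queue requests when under pressure" idea: a bounded semaphore in front of the expensive create path, so a burst of parallel creates is serialized instead of all hitting the disk at once. This is a sketch of the trade-off being discussed, not a containerd or CRI proposal.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// throttledRuntime caps how many creates may touch the disk concurrently.
type throttledRuntime struct {
	slots chan struct{} // capacity = max concurrent creates
}

func newThrottledRuntime(maxConcurrent int) *throttledRuntime {
	return &throttledRuntime{slots: make(chan struct{}, maxConcurrent)}
}

// createContainer waits for a slot (or the caller's deadline) before doing the
// IO-heavy work, trading a little latency for less thrashing under pressure.
func (r *throttledRuntime) createContainer(ctx context.Context, name string) error {
	select {
	case r.slots <- struct{}{}:
		defer func() { <-r.slots }()
	case <-ctx.Done():
		return fmt.Errorf("queued too long for %s: %w", name, ctx.Err())
	}
	time.Sleep(50 * time.Millisecond) // stand-in for snapshot and bundle setup IO
	return nil
}

func main() {
	rt := newThrottledRuntime(2) // at most 2 creates touch the disk at once
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	done := make(chan error)
	for i := 0; i < 6; i++ {
		name := fmt.Sprintf("ctr-%d", i)
		go func() { done <- rt.createContainer(ctx, name) }()
	}
	for i := 0; i < 6; i++ {
		fmt.Println(<-done)
	}
}
```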

jsturtevant commented 2 years ago

I have also seen something similar with the Windows sandbox creation in the Windows e2e tests recently. I looked for an issue to track possible changes to kubelet and couldn't find one, so I created https://github.com/kubernetes/kubernetes/issues/107561

qiutongs commented 2 years ago

The dockershim path, from kubelet down through the docker API and then through containerd (versus directly to containerd), has more serialization and different code in the now-deleted dockershim (even though containerd was still in the path), thus producing different behavior. Sometimes more serialization is faster

@mikebrow Do you have more details to share? More specifically, in terms of rate-controlling container runtime requests, how do dockershim and the containerd CRI plugin behave differently?

kubino148 commented 2 years ago

Happened to me. It's a really serious bug when you run GitLab CI/CD runners on containerd-based k8s, because some pipelines are designed to run multiple containers in parallel, and this bug happens very often. Is going back to docker really the only option here?

matti commented 2 years ago

Is going back to docker really the only option here?

yes, now it looks like it.

fuweid commented 2 years ago

Hi @matti, @kubino148, @sadortun, and all subscribers, would you mind providing the goroutine stack of containerd when you see the error? Thanks.

kill -USR1 $(pidof containerd) will trigger the dump; check the containerd log to get the stack.

qiutongs commented 2 years ago

Amended Theory 1

(See the original theory 1 in https://github.com/containerd/containerd/issues/4604#issuecomment-1006013231)

Docker has a similar mechanism of "reserving the container name" to prevent conflicts. However, dockershim handles it in a different way from the containerd CRI implementation.

err = docker.CreateContainer
if (err == container name conflict) {
    removeErr = docker.RemoveContainer
    if (removeErr == nil) {
        return err
    } else {
        if (removeErr == "Container not found") {
            randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
            return docker.CreateContainer
        } else {
            return err
        }
    } 
}

https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284

In fact, this difference in retry behavior leads to significantly different CRI request rates between dockershim and containerd. With containerd, a CreateContainer request arrives about every 10-20s (see the example in https://github.com/containerd/containerd/issues/4604#issue-716346199). With dockershim, a CreateContainer request arrives about every 2 minutes. This is because requests that hit "failed to reserve name" return quickly in containerd, while in dockershim a request with a new container name can take the full 2 minutes. The same applies to RunPodSandbox. Therefore, the load of CRI requests on containerd is roughly 10x the load on dockershim, and I infer this overloads the node further.

This theory echoes a similar bug solved in CRI-O (https://bugzilla.redhat.com/show_bug.cgi?id=1785399), in which the solution says "Now, when systems are under load, CRI-O does everything it can to slow down the Kubelet and reduce load on the system."

I believe our direction should also be to slow down a kubelet that is sending too many requests. This might be aligned with Mike's comment: https://github.com/containerd/containerd/issues/4604#issuecomment-1013268187

chrisroat commented 2 years ago

Is going back to docker really the only option here?

yes, now it looks like it.

On GCP, only for a little while longer, though. Just got an email:

[Action Required] Migrate to Containerd node images before GKE v1.24

Support for Docker as a container runtime on Kubernetes nodes will be removed from OSS Kubernetes and GKE starting with v1.24. Please migrate your GKE workloads to Containerd as soon as possible.

jsoref commented 2 years ago

If I'm reading https://cloud.google.com/kubernetes-engine/docs/release-schedule and https://cloud.google.com/kubernetes-engine/versioning#lifecycle correctly, 1.23 will be supported through approximately 2023-06, which is a bit in the future. (This would require you to have created a static cluster as opposed to a regular upgrading cluster.)

Personally, I'd rather figure out what's wrong here and get it fixed (but I'm currently snowed in under a bunch of other tasks, sorry).

matti commented 2 years ago

from GKE email:

Most user workloads do not have dependencies on the container runtime. If you use a node image based on Dockershim, please migrate your GKE workloads to a Containerd node image as soon as possible.

How can we get someone from GKE on this thread?

qiutongs commented 2 years ago

How can we get someone from GKE on this thread?

Hi Matti, I am from GKE. We are fully aware of this issue and are prioritizing it.

fuweid commented 2 years ago

That said, we are pretty convinced the issue comes from one of the two following causes:

  1. Disk IO is too high and containerd times out because of that.
  2. Starting 10-20+ pods at the same time on a single node causes a memory usage spike, and at some point the containerd process gets killed.

@sadortun I tried to reproduce it with high disk IO, but sorry, I didn't manage to. :(

If the CreateContainer doesn't return, it may be hanging in one of two syscalls:

Docker uses lazy umount, which might hide the problem. If someone sees this issue, please provide more details such as:

Thanks

sadortun commented 2 years ago

@fuweid

Thanks for your time on this issue.

Unfortunately, I stopped using COS back in 2020 after we could not find a solution.

I'm 97% sure we were using overlayfs; as for the rest, I have no way to recover that historical data.

Sorry about that.

fuweid commented 2 years ago

@sadortun I filed a PR to improve this: https://github.com/containerd/containerd/pull/6478#issuecomment-1022799322

Not sure what is different between docker and containerd on GKE, sorry about that.

derekperkins commented 2 years ago

We're on GKE 1.21.6-gke1500 and we've been seeing this problem for the last 1-2 months

qiutongs commented 2 years ago

@sadortun I filed a PR to improve this. #6478 (comment)

I got some good results showing this patch improves the latency of "CreateContainer".

Please note this is based on a couple of experiments, not an ample data set. stress-ng doesn't produce stable IOPS, so the disk state cannot be exactly the same in the two cases.

qiutongs commented 2 years ago

Summary (2022/02)

"failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer request reserving the same container name (like below). T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side) T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry) T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error T4: 1st CreateContainer request is still in-flight…

Don't panic. Given sufficient time, the container and pod will be created successfully, as long as you are using restartPolicy:Always or restartPolicy:OnFailure in PodSpec.

Root Cause and Fix

Slow disk operations (e.g. disk throttling on GKE) are the culprit. The heavy disk IO can come from a number of factors: the user's disk-heavy workload, pulls of big images, and the containerd CRI implementation itself.

An unnecessary sync-fs operation was found in the CreateContainer stack; it is where CreateContainer gets stuck. The sync-fs was removed in https://github.com/containerd/containerd/pull/6478. Not only does this make CreateContainer return faster, it also reduces the disk IO generated by containerd.

Please note there may be other, undiscovered reasons contributing to this problem.

Mitigation

  1. If pods have failed, consider using restartPolicy:Always or restartPolicy:OnFailure in the PodSpec
  2. Increase the boot disk IOPS (e.g. upgrade disk type or increase disk size)
  3. Upgrade containerd with this patch https://github.com/containerd/containerd/pull/6478, which will be available in 1.6+ and 1.5.x (backport in progress)