It looks like there is a container with name web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0 and id 8b21a9870e3ecc09bbb92da2036bd3c9b35f5829873d80cfbd14dc1e1827923f in containerd, while kubelet wants to create a new one with the same name web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0, and the CRI plugin fails on it.
Could you show docker ps -a or ctr c list?
Did you manage to resolve this issue @sadortun? We are experiencing the same. Also on GKE with containerd runtime.
Normal Scheduled 2m21s default-scheduler Successfully assigned ***-77c979f8bf-px4v9 to gke-***-389a7e33-t1hl
Warning Failed 20s kubelet Error: context deadline exceeded
Normal Pulled 7s (x3 over 2m20s) kubelet Container image "***" already present on machine
Warning Failed 7s (x2 over 19s) kubelet Error: failed to reserve container name ***-77c979f8bf-px4v9_***": name "***-77c979f8bf-px4v9_***" is reserved for "818fcfef09165d91ac8c86ed88714bb159a8358c3eca473ec07611a51d72b140"
We are deploying the same image to multiple deployments (30 - 40 pods) at the same time. No such issues with docker runtime.
Eventually, kubelet is able to resolve this issue without manual intervention, however, it is significantly slowing the deployment of new images during the release (extra 2-3 minutes to resolve name conflicts).
Hi @pfuhrmann
We did investigate this quite deeply with GKE Dev team and we were not able to reproduce it.
That said, we are pretty convinced the issue comes from one of the two following causes:
- Disk IO is too high and containerd times out because of that.
- Starting 10-20+ pods at the same time on a single node causes a memory usage spike, and at some point the containerd process gets killed.
Unfortunately, after a month of back and forth with the GKE devs, we were not able to find a solution.
The good news is that, for us, we refactored our application and were able to reduce the number of starting pods from about 20 down to 5. Since then, we have had no issues.
You might also want to increase node boot drive size. It seems to help too.
Any update on this? Did anybody manage to solve this? We are facing the same issue.
We are also seeing the same issue, GKE with containerd. It does seem to be correlated with starting many pods at once.
Switching from cos_containerd back to cos (docker based) seems to have resolved the situation, at least in the short term.
Same for us; once we switched back to cos with docker, everything worked.
In the end we still had occasional issues, and we also had to switch back to cos.
jotting down some notes here, apologies if it's lengthy:
Let me try to explain/figure out the reason you got "failed to reserve container name" ..
Kubelet tried to create a container that it had already asked containerd to create at least once.. when containerd tried the first time, it received a variable in the container create metadata named attempt, and that variable held the default value 0
.. then containerd reserved the unique name for attempt 0 that you see in your log (see the _0 at the end of the name) "web_apps-abcd-6b6cb5876b-nn9md_default_3dc00fd6-0c5d-42be-bec8-e4f6cad616da_0"
... something happened causing a context timeout between kubelet and containerd .. the kubelet context timeout value is configurable.. "--runtime-request-timeout duration Default: 2m0s" .. a 2min timeout could happen for any number of reasons: an unusually long garbage collection, a file system hiccup, locked files, deadlocks while waiting, some very expensive init operation occurring on the node for one of your other containers.. who knows? That's why we have/need recovery procedures.
What should have happened is kubelet should've incremented the attempt number (or at least that's how I see it from this side (the containerd side) of the CRI api), but kubelet did not increment the attempt number, and furthermore containerd was still trying to create the container from the first request.. or the create on the containerd side may even be finished at this point; it is possible the timeout only happened on the kubelet side and containerd continued finishing the create, possibly even attempting to return the success result. If containerd actually failed, it would have deleted the reservation for that container id, as the immediate thing after we reserve the id in containerd is to defer its removal on any error in the create.. https://github.com/containerd/containerd/blob/master/pkg/cri/server/container_create.go#L65-L84
So ok.. skimming over the kubelet code.. I believe this is the code that decides what attempt number we are on? https://github.com/kubernetes/kubernetes/blame/master/pkg/kubelet/kuberuntime/kuberuntime_container.go#L173-L292
In my skim.. I think I see a window where kubelet will try attempt 0 a second time after the first create attempt fails with a context timeout. But I may be reading the code wrong? @dims @feiskyer @Random-Liu
Bumped into this issue as well. Switching back to cos with docker.
Fwiw, we're hitting this this week.
In my case, the pod is owned by a (batch/v1)job, and the job by a (batch/v1beta1)cronjob.
same, switching back to docker
Same issue here, with GKE version 1.20.9-gke.701.
@dims @feiskyer @Random-Liu @ehashman
FYI.. kubelet is trying to create the same container twice using the same start count "0", and also does not respond to the "failed to reserve container name" error message. Thoughts?
Having this issue as well with GKE. Appreciate the detailed research folks have posted here.
Is there a reproducer? Can you please file an issue against k/k?
I confirm that moving back to docker solves the problem:
gcloud container clusters upgrade mycluster --image-type cos --node-pool mynodepool --zone myzone
This also happens with UBUNTU_CONTAINERD, not just COS_CONTAINERD
@mikebrow I investigated a reported issue in k/k before https://github.com/kubernetes/kubernetes/issues/94085
My summary is that kubelet has the correct logic for incrementing the restart number, which is set to "current_restart + 1". See this kubelet code.
- If the CreateContainer request eventually succeeds on the containerd side, kubelet will see it and increment the restart number on the next iteration of SyncPod. The pod will eventually be ready.
- If the CreateContainer request eventually fails on the containerd side, containerd should release the name. On the next iteration of kubelet's SyncPod, it shouldn't see the "failed to reserve container name" error.
- If the CreateContainer request is stuck on the containerd side, the name is never released, and kubelet will keep seeing the "failed to reserve container name" error.
Hello @qiutongs, my k8s cluster is used to run Spark jobs, and recently we got a lot of CreateContainerError.
As in your report, I found that on the first request containerd creates the container but it gets stuck in the Created state.
I also checked the logs of both containerd and kubelet but did not find any more details.
Do you have any guide/instructions or recommendations to help me dig deeper? Thanks.
Does anyone have any other solution to this issue except moving back to cos with docker? Eventually we will have to upgrade our GKE cluster, and we might face this issue again in the future.
Unfortunately, the only working solution is to move back to cos with docker. Amazing to see that this critical bug has been open for more than a year, and still no fix.
@navnitDevOps
Does anyone have any other solution
Have a look at how many containers are starting at the same time, and how much memory each of them takes at startup. Make sure you have plenty left.
In our case, even when we were short on RAM, having more free RAM seemed to help, but we still had to revert to COS.
Same problem here
containerd github.com/containerd/containerd 1.4.6 d71fcd7d8303cbf684402823e425e9dd2e99285d
Amazon EKS 1.21
Just had this happen to me on GKE.
Happened to me too on GKE
Ran into this on KinD (kindest/node:v1.21, single node) when disk IO was higher than expected during tests, which I suspect was caused/exacerbated by creating too many pods at once. Creating fewer pods at once still didn't work at first, but restarting containerd and kubelet (in that order) caused those few pods to come up as expected. I was then able to slowly scale all of the test pods back up to their expected replica counts without a problem. My guess is that once this error occurs, kubelet and containerd are "stuck", but restarting them appears to un-stick them. No idea if this has any applicability to an actual production environment.
I can reproduce this issue (consistently) only when scaling multiple heavy pods (heavy both in terms of image size and the processes launched).
Just happened on GKE...not the first time...
I'm running k8s 1.21
so I can't even go back to cos
...
I also have no clear way of "getting out" of this, so it can wreak havoc in production - any ideas?
We also got the same issue.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m54s default-scheduler Successfully assigned search/document-ingestion-6b7587d4b6-jn5t7 to gke-search-s-search-s-d5f7ec3d-9n9v
Warning Failed 107s kubelet Error: context deadline exceeded
Normal Pulled 7s (x9 over 3m47s) kubelet Container image "docker.io/istio/proxy_init:1.1.17" already present on machine
Warning Failed 7s (x8 over 106s) kubelet Error: failed to reserve container name "istio-init_document-ingestion-6b7587d4b6-jn5t7_search_7e2bda9a-ba49-48b2-8795-195f32542fc0_0": name "istio-init_document-ingestion-6b7587d4b6-jn5t7_search_7e2bda9a-ba49-48b2-8795-195f32542fc0_0" is reserved for "2623803ef8fab2dcdd68141abf47e89827c921d4d1451002724b2c00ec7eb32f"
Here is my current analysis. I will keep updating this comment.
"failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer
request reserving the same container name (like below).
T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side)
T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry)
T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error
T4: 1st CreateContainer request is still in-flight…
It simply indicates the CreateContainer request is slower than the configurable --runtime-request-timeout (default 2min).
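To illustrate that timeline (a sketch, not kubelet or containerd code; durations and names are made up), a client-side context deadline can fire while the work it requested keeps running:

```go
// Illustration of a client-side deadline firing while the server-side work
// continues; names and durations are made up for the example.
package main

import (
	"context"
	"fmt"
	"time"
)

// slowCreate simulates a CreateContainer handler that takes longer than the
// caller is willing to wait.
func slowCreate(done chan<- string) {
	time.Sleep(300 * time.Millisecond)
	done <- "create finished (but nobody was waiting for the answer)"
}

func main() {
	// Stand-in for kubelet's --runtime-request-timeout, scaled down.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	done := make(chan string, 1)
	go slowCreate(done)

	select {
	case msg := <-done:
		fmt.Println(msg)
	case <-ctx.Done():
		// The caller gives up and will retry with the same name, even though
		// the original create is still in flight.
		fmt.Println("caller error:", ctx.Err()) // context deadline exceeded
		// The original work still completes in the background.
		fmt.Println(<-done)
	}
}
```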
Based on my observation and investigation so far, I found the following facts:
- Given sufficient time, the container and pod are eventually created successfully, as long as the pod uses restartPolicy:Always or restartPolicy:OnFailure in PodSpec. (Yes, restartPolicy affects the behavior of container creation.)
- Pods with restartPolicy:Always or restartPolicy:OnFailure in PodSpec. Expected symptom: some pods are up and generate heavy IO, but others are not.
Docker has a similar mechanism of "reserving container name" to prevent conflict. However, dockershim handles it in a different way from containerd CRI implementation.
err = docker.CreateContainer
if (err == container name conflict) {
removeErr = docker.RemoveContainer
if (removeErr == nil) {
return err
} else {
if (removeErr == "Container not found") {
randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
return docker.CreateContainer
} else {
return err
}
}
}
https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284
In my experiment, it keeps hitting the "randomize the container name" case. This indicates every kubelet retry will try to create a container with a new name in dockershim. Containerd, however, sticks to a single container name, so all subsequent retries are doomed to fail while the initial request is still in flight.
In conclusion, dockershim retries more aggressively. Therefore, docker has a higher chance of creating the container successfully, and much faster, than containerd.
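A minimal Go sketch of the dockershim-style recovery summarized by the pseudocode above (illustrative only; helper names and errors are made up, not the actual kubelet code):

```go
// Minimal sketch of the dockershim-style recovery: on a name conflict, try to
// remove the stale container; if it cannot be found, retry the create under a
// randomized name. Helper names are illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
)

var (
	errNameConflict = errors.New("container name conflict")
	errNotFound     = errors.New("container not found")
)

func createWithRecovery(name string, create func(string) error, remove func(string) error) error {
	err := create(name)
	if !errors.Is(err, errNameConflict) {
		return err
	}
	removeErr := remove(name)
	if removeErr == nil {
		// Stale container removed; surface the original error so the
		// caller (kubelet) retries on the next SyncPod iteration.
		return err
	}
	if errors.Is(removeErr, errNotFound) {
		// Nothing to remove: fall back to a randomized name.
		return create(fmt.Sprintf("%s_%08x", name, rand.Uint32()))
	}
	return err
}

func main() {
	// Simulate a runtime where the original name is stuck "reserved".
	create := func(n string) error {
		if n == "web_apps_0" {
			return errNameConflict
		}
		fmt.Println("created", n)
		return nil
	}
	remove := func(n string) error { return errNotFound }
	fmt.Println(createWithRecovery("web_apps_0", create, remove))
}
```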
Expected symptom: none of the pods are up.
Containerd has worse image-pull control than docker. For example, it may pull too many images in parallel, which generates more disk IO.
(No code reference found yet.)
Unfortunately, I haven't found a way to reproduce a case where docker is consistently superior to containerd.
Experiment 1 setup: generate heavy disk IO with the stress-ng tool (execution steps and expected/actual results omitted here).
Experiment 2 execution: pull a large set of images in parallel: alpine ubuntu python busybox redis node mysql nginx httpd mongo memcached postgres mariadb wordpress influxdb consul rabbitmq debian amazonlinux cassandra (expected/actual results omitted here).
We also just hit this one across many clusters with an I/O overload on the root disk (symptom: high I/O wait times and little I/O idle times.)
This was just a secondary issue resulting from an initially timed-out request to containerd CRI shim due to I/O slowness.
However, this bug is a problem as it delays the time until the workload eventually starts. Initially timed-out requests continue to fail due to failed to reserve container name even though I/O performance has improved in the meantime.
We had to increase the IOPS for the root disks in order to prevent running into DeadlineExceeded
errors during pod creation visible in the kubelet logs which result in the occurrence of the bug described in this issue.
@danielfoehrKn If the root cause is a slow disk, I wonder why docker does not suffer from the same issue.
If the root cause is a slow disk, I wonder why docker does not suffer from the same issue.
At least for us, the disk was most likely the reason why the initial request to the CRI timed out.
After it timed out, the pod could not be created because each subsequent API call by the kubelet ran into the failed to reserve container name problem.
We have not tested it on docker (cannot switch either). However, if dockerd/docker-shim does not have the same problem due to keeping some false "state", then subsequent requests by the kubelet to create the PodSandbox could work given the reason for the initial DeadlineExceeded
error from the container runtime is not present any more (such as disk I/O overload).
This could be the case when many I/O heavy pods are scheduled to the node at the same time all causing image pulls, doing initialization work ....
@danielfoehrKn If the root cause is a slow disk, I wonder why docker does not suffer from the same issue.
Per CRI, containerd is receiving and processing requests in parallel. Kubelet has a couple of bugs/issues in that it's making too many parallel requests even under disk pressure; then when a timeout occurs on its side, it fails to recognize the problem was of its own creation, presumes asking the CRI to do the same thing again will make it work this time even though it's still doing what was previously requested, and containerd reports the error. The dockershim path with kubelet down through the docker api and then through containerd (vs directly to containerd) has more serialization and different code in the now deleted docker-shim code (even though containerd was still in the path), thus producing different behavior. Sometimes more serialization is faster (such as when under too much resource pressure resulting in thrashing of resources, for example memory swapping to disk while creating snapshots and new containers loading up and garbage collecting memory used for older requests... etc.).
Kubelet should be modified to recognize the timeout situation and avoid subsequent duplicate requests... or we can modify the CRI api to serialize service requests (I do not recommend we go this way), or we can change the API from parallel requests with client-side timeouts to subscription requests with a first ack response and subsequent status change event responses. We could also have a change to the CRI api to request (by policy) serialization (through queuing) of requests when under pressure... or serialization by "failing" requests when currently processing parallel requests under pressure (also not recommended). If we want to "manage" requests at the CRI level we can do so, but we're going to want to talk about node management policies.
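As a rough illustration of the "serialization through queuing" idea (a sketch with assumed policy values, not a CRI proposal or kubelet code), a bounded semaphore can cap how many creates run concurrently while the rest queue:

```go
// Illustrative only: limit concurrent create requests with a buffered channel
// acting as a semaphore, queuing the rest instead of firing them in parallel.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const maxInFlight = 2 // pretend policy: at most 2 concurrent creates
	sem := make(chan struct{}, maxInFlight)

	var wg sync.WaitGroup
	for i := 0; i < 6; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire: blocks when the node is "busy"
			defer func() { <-sem }() // release

			fmt.Printf("creating container %d\n", id)
			time.Sleep(50 * time.Millisecond) // stand-in for the actual create
		}(i)
	}
	wg.Wait()
}
```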
I have also seen something similar with the Windows sandbox creation in the Windows e2e tests recently. I looked for an issue to track possible changes to kubelet and couldn't find one, so I created: https://github.com/kubernetes/kubernetes/issues/107561
Dockershim path with kubelet down through the docker api then through containerd (vs directly to containerd) has more serialization and different code in the now deleted docker-shim code (even though containerd was still in the path), thus producing different behavior. Sometimes more serialization is faster
@mikebrow Do you have more details to share? More specifically, in terms of rate-controlling container runtime requests, how do dockershim and the containerd CRI plugin behave differently?
Happened to me; it's really a serious bug when you are running your GitLab CI/CD runners in a containerd-based k8s, because some pipelines are designed to run multiple containers in parallel and this bug happens very often. Is going back to docker really the only option here?
Is going back to docker really the only option here?
yes, now it looks like it.
Hi, @matti and @kubino148 and @sadortun and all subscribers, would you mind providing the goroutine stack dump of containerd when you see the error? Thanks.
kill -USR1 $(pidof containerd)
will trigger the dump; then check the containerd log to get the stack.
(See the original theory 1 in https://github.com/containerd/containerd/issues/4604#issuecomment-1006013231)
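For anyone curious how such a dump works in general, here is a minimal Go example of a process that writes its goroutine stacks when it receives SIGUSR1 (the same kind of mechanism containerd's dump relies on; this is not containerd's code):

```go
// Minimal example of dumping goroutine stacks on SIGUSR1 (Unix-like systems).
package main

import (
	"fmt"
	"os"
	"os/signal"
	"runtime/pprof"
	"syscall"
	"time"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGUSR1)

	go func() {
		for range sigs {
			// debug=2 prints every goroutine with a full stack trace.
			_ = pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
		}
	}()

	fmt.Println("pid:", os.Getpid(), "- send SIGUSR1 to dump goroutine stacks")
	time.Sleep(time.Hour) // keep the process alive
}
```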
Docker has a similar mechanism of "reserving container name" to prevent conflict. However, dockershim handles it in a different way from containerd CRI implementation.
err = docker.CreateContainer
if (err == container name conflict) {
removeErr = docker.RemoveContainer
if (removeErr == nil) {
return err
} else {
if (removeErr == "Container not found") {
randomize the container name XYZ to XYZ_<RANDOM_SUFFIX>
return docker.CreateContainer
} else {
return err
}
}
}
https://github.com/kubernetes/kubernetes/blob/release-1.19/pkg/kubelet/dockershim/helpers.go#L284
In fact, this difference in retry behavior leads to significantly different CRI request rates between dockershim and containerd. In containerd, the CreateContainer request comes about every 10s-20s (see the example in https://github.com/containerd/containerd/issues/4604#issue-716346199). But in the dockershim case, the CreateContainer request comes about every 2min. This is because the requests hitting "failed to reserve name" return fast in containerd, while the requests with a new container name in dockershim can take 2min. This applies to RunPodSandbox as well. Therefore, the load of CRI requests in containerd is roughly 10x the load in dockershim, and I infer this makes the node further overloaded.
This theory echoes a similar bug solved in CRI-O - https://bugzilla.redhat.com/show_bug.cgi?id=1785399 - in which the solution says "Now, when systems are under load, CRI-O does everything it can to slow down the Kubelet and reduce load on the system."
I believe our direction is also to slow down Kubelet sending too many requests. This might be aligned with Mike's comment - https://github.com/containerd/containerd/issues/4604#issuecomment-1013268187
Is going back to docker really the only option here?
yes, now it looks like it.
On GCP, only for a little while longer, though. Just got an email:
[Action Required] Migrate to Containerd node images before GKE v1.24
Support for Docker as a container runtime on Kubernetes nodes will be removed from OSS Kubernetes and GKE starting with v1.24. Please migrate your GKE workloads to Containerd as soon as possible.
If I'm reading https://cloud.google.com/kubernetes-engine/docs/release-schedule and https://cloud.google.com/kubernetes-engine/versioning#lifecycle correctly, 1.23 will be supported through approximately 2023-06, which is a bit in the future. (This would require you to have created a static cluster as opposed to a regular upgrading cluster.)
Personally, I'd rather figure out what's wrong here and get it fixed (but I'm currently snowed in under a bunch of other tasks, sorry).
from GKE email:
Most user workloads do not have dependencies on the container runtime. If you use a node image based on Dockershim, please migrate your GKE workloads to a Containerd node image as soon as possible.
How can we get someone from GKE on this thread?
How can we get someone from GKE on this thread?
Hi Matti, I am from GKE. We are fully aware of this issue and are prioritizing it.
That said, we are pretty convinced the issue comes from one of the two following causes:
- Disk IO is too high and containerd times out because of that.
- Starting 10-20+ pods at the same time on a single node causes a memory usage spike, and at some point the containerd process gets killed.
@sadortun I was trying to reproduce it with high disk IO. But sorry, I didn't reproduce it. :(
If the CreateContainer doesn't return, it may be hanging in two syscalls:
Docker uses lazy-umount, which might hide the problem. If someone sees this issue, please provide more details, like:
Thanks
@fuweid
Thanks for your time on this issue.
Unfortunately, I did stop using COS back in 2020 after we could not find a solution.
I'm 97% sure we were using overlayfs, and as for the rest I have no way to find this historical data.
Sorry about that.
@sadortun I filed a PR to enhance this: https://github.com/containerd/containerd/pull/6478#issuecomment-1022799322
Not sure what is different between docker and containerd in GKE, sorry about that.
We're on GKE 1.21.6-gke1500 and we've been seeing this problem for the last 1-2 months
@sadortun I filed a PR to enhance this. #6478 (comment)
I got some good results showing this patch improves the latency of "CreateContainer".
stress-ng --io 1 -d 1 --timeout 7200 --hdd-bytes 8M
Deployed nginx with 25 replicas; CreateContainer completes within 2 mins. Please note this is based on a couple of experiments, not an ample data set. stress-ng doesn't produce stable IOPS, so the disk state cannot be exactly the same in the two cases.
"failed to reserve container name" error is returned by containerd CRI if there is an in-flight CreateContainer
request reserving the same container name (like below).
T1: 1st CreateContainer(XYZ) request is sent. (Timeout on Kubelet side)
T2: 2nd CreateContainer(XYZ) request is sent (Kubelet retry)
T3: 2nd CreateContainer request returns "failed to reserve container name XYZ" error
T4: 1st CreateContainer request is still in-flight…
Don't panic. Given sufficient time, the container and pod will be created successfully, as long as you are using restartPolicy:Always or restartPolicy:OnFailure in PodSpec.
Slow disk operations (e.g. disk throttling on GKE) are the culprit. Heavy disk IO can come from a number of sources: the user's disk-heavy workload, pulling big images, and the containerd CRI implementation itself.
An unnecessary sync-fs operation was found in the CreateContainer stack. It is where CreateContainer gets stuck. The sync-fs was removed in https://github.com/containerd/containerd/pull/6478. Not only does this make CreateContainer return faster, it also reduces the disk IO generated by containerd.
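For intuition about why an extra sync hurts (illustrative only; the removed sync-fs in containerd was a filesystem-wide flush, heavier than the single-file fsync shown here), compare a buffered write with one that forces data to disk:

```go
// Illustration only: compares a buffered write with a write that forces data
// to disk via Sync (fsync). File names and sizes are arbitrary.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func writeFile(path string, data []byte, sync bool) (time.Duration, error) {
	start := time.Now()
	f, err := os.Create(path)
	if err != nil {
		return 0, err
	}
	defer f.Close()
	if _, err := f.Write(data); err != nil {
		return 0, err
	}
	if sync {
		// Forces the kernel to flush the data to the underlying disk.
		if err := f.Sync(); err != nil {
			return 0, err
		}
	}
	return time.Since(start), nil
}

func main() {
	dir, err := os.MkdirTemp("", "syncdemo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	data := make([]byte, 8<<20) // 8 MiB of zeroes
	for _, sync := range []bool{false, true} {
		d, err := writeFile(filepath.Join(dir, fmt.Sprintf("f-%v", sync)), data, sync)
		if err != nil {
			panic(err)
		}
		fmt.Printf("sync=%v took %v\n", sync, d)
	}
}
```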
Please note there are perhaps other undiscovered reasons contributing to this problem.
Mitigation: use restartPolicy:Always or restartPolicy:OnFailure in PodSpec.
Description
Hi!
We are running containerd on GKE with pretty much all defaults. A dozen nodes and a few hundred pods. Plenty of memory and disk free.
We started to have many pods fail due to the failed to reserve container name error in the last week or so. I do not recall any specific changes to the cluster or to the containers themselves. Any help will be greatly appreciated!
Steps to reproduce the issue: I have no clue how to specifically reproduce this issue.
The cluster has nothing special; the deployment is straightforward. The only thing that could be relevant is that our images are quite large, around 3 GB.
I got a few more details here : https://serverfault.com/questions/1036683/gke-context-deadline-exceeded-createcontainererror-and-failed-to-reserve-contai
Describe the results you received:
Describe the results you expected: Live a happy life, error free :)
Output of containerd --version:
Any other relevant information: