danports closed this issue 1 month ago.
Hi @danports, thanks for bringing this to our attention; this is unfortunate.
In the short term, if you're blocked by this and are on a kOps cluster, consider switching to the upstream volume modification via the VolumeAttributesClass feature (which we are trying to get turned on by default in Kubernetes 1.31 so that EKS customers can start using it).
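(Rough sketch of what that can look like, assuming the VolumeAttributesClass feature gate and the matching storage.k8s.io API version are enabled, and that your external-resizer/driver versions support it; the class name, PVC name, and parameter values below are purely illustrative:)

```sh
# Sketch only: assumes VolumeAttributesClass is enabled in the cluster
# (alpha in 1.29/1.30, beta in 1.31) and that the EBS CSI driver honors it.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1alpha1   # use v1beta1 on newer clusters
kind: VolumeAttributesClass
metadata:
  name: gp3-higher-iops               # illustrative name
driverName: ebs.csi.aws.com
parameters:                           # illustrative values
  iops: "4000"
  throughput: "250"
EOF

# Request the modification by pointing an existing PVC at the class.
kubectl patch pvc my-data --type merge \
  -p '{"spec":{"volumeAttributesClassName":"gp3-higher-iops"}}'
```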
In the meantime, I'll go set up some clusters with constant volume modifications and try to spot any misbehaving sidecars so we can track this bug down.
Again, thank you! 🙏
Thanks @AndrewSirenko. I'll take a look at the feature you mentioned. This isn't a huge blocker, since the issue occurs only infrequently (about 1-2x/month, though I don't have enough archived telemetry to confirm the earlier problems were the same issue) and can be worked around by killing the pod. It looks like the problem started in late March, shortly after upgrading to Kubernetes 1.29 from 1.28, which included an upgrade from registry.k8s.io/sig-storage/csi-resizer:v1.4.0 to the current public.ecr.aws/ebs-csi-driver/volume-modifier-for-k8s:v0.2.1, according to the cluster's kOps logs.
Also, is there anything I can do in the cluster (log verbosity, memory dumps, etc.) to help collect troubleshooting data if/when the issue recurs?
Thanks for narrowing the regression down to somewhere between 0.1.4 and 0.2.1; we made a few significant changes in between.
And if you could bump the sidecar's verbosity up to 7, that would be helpful. Thanks a million!
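(For anyone following along, a sketch of one way to do that, assuming the sidecar takes the standard klog -v flag and the default ebs-csi-controller Deployment in kube-system; names will differ for Helm- or addon-managed installs:)

```sh
# Sketch: raise log verbosity on the volumemodifier sidecar to 7.
kubectl -n kube-system edit deployment ebs-csi-controller
# In the editor, find the container named "volumemodifier" and add/adjust:
#   args:
#     - --v=7
# The controller pods restart with the higher verbosity once saved.
```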
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Just saw this issue arise again today with public.ecr.aws/ebs-csi-driver/volume-modifier-for-k8s:v0.3.0. I bumped the sidecar verbosity up to 7 as requested and will update if I see anything useful in the logs the next time this recurs.
/kind bug
What happened? Occasionally the volumemodifier container in one of the EBS CSI controller pods starts using the maximum CPU possible (stuck in some kind of busy wait, perhaps?). Here's a chart showing CPU usage for volumemodifier containers in a cluster over 3 days (there are only 2 vCPUs on the nodes, so the container is basically using up all of the available CPU time): You can see from the chart that the lease likely switched from one container to another a couple of times, but the container continued pegging the CPU regardless of which pod it was running in.
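(If you want to check whether you're seeing the same thing, per-container CPU is visible via metrics-server; the label selector below assumes the default app=ebs-csi-controller labels:)

```sh
# Assumes metrics-server is installed; shows CPU per container so a pegged
# volumemodifier sidecar stands out from the other containers in the pod.
kubectl -n kube-system top pod --containers -l app=ebs-csi-controller
```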
What you expected to happen? The container should use a minuscule amount of CPU time, like it usually does.
How to reproduce it (as minimally and precisely as possible)? Good question! I've only noticed this problem fairly recently - I can try to track down when it started.
Anything else we need to know?: I have two replicas of the controller running, but usually only one of the replicas has a volumemodifier container running out of control, probably the one with the active lease. The logs for the container look like this:
It seems like killing the CPU-hogging pod is sufficient to resolve the problem until it recurs.
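(Spelled out, the workaround is just deleting the affected replica; names below assume the default controller Deployment in kube-system:)

```sh
# Delete the misbehaving replica; the Deployment recreates it and CPU usage
# returns to normal until the issue recurs.
kubectl -n kube-system delete pod <name-of-the-cpu-hogging-ebs-csi-controller-pod>
```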
Environment
- Kubernetes version (use `kubectl version`): 1.29.4
- Volume modifier sidecar image: public.ecr.aws/ebs-csi-driver/volume-modifier-for-k8s:v0.2.1@sha256:78c116f223997fa8d074846bf10e1a08cc0b723dc03c7a20900685442b5a3504