kubernetes / autoscaler

Autoscaling components for Kubernetes
Apache License 2.0

VPA Recommender fails to record `OOM` and increase resources due to `KeyError` #6705

Open · nikimanoledaki opened 2 months ago

nikimanoledaki commented 2 months ago

Which component are you using?: vertical-pod-autoscaler

What version of the component are you using?: Component version: 1.0.0 | https://artifacthub.io/packages/helm/fairwinds-stable/vpa/4.0.1

What k8s version are you using (kubectl version)?: 1.27

What environment is this in?: AKS

What did you expect to happen?: Expected VPA to recover from an OOM issue by raising the resource limits.

What happened instead?:

We had a rather disruptive OOM loop that lasted for an hour (until the VPA CR's auto updates were disabled). The VPA Updater & Recommender kept trying and failing to react to a VPA target Pod being OOMKilled: VPA recommended a very low memory request and limit for the Pod, which was immediately OOM'ed, evicted, OOM'ed again, evicted again, and so on. VPA should have been able to end this loop by raising the resource limit, but it couldn't do that because the Pod was non-existent by then.

Here is the sequence of events:

  1. Updater accepts the Pod for update: `update_priority_calculator.go:143] pod accepted for update <ns>/<pod> with priority 0.2350253824929428`

  2. Updater evicts the Pod to apply the resource recommendation: `updater.go:215] evicting pod <pod>-69856fc5f7-m848k`

  3. Recommender deletes the Pod: `cluster_feeder.go:401] Deleting Pod {<ns> <pod>-69856fc5f7-m848k}`

  4. Recommender detects an OOM in the Pod: `cluster_feeder.go:445] OOM detected {Timestamp:<date> 11:08:54 +0000 UTC Memory:104857600 ContainerID:{PodID:{Namespace:<ns> PodName:<pod>} ContainerName:<container>}}`

  5. Updater detects a quick OOM in the Pod: `update_priority_calculator.go:114] quick OOM detected in pod <ns>/<pod>, container <container>`

  6. Recommender deletes the Pod (again): `cluster_feeder.go:401] Deleting Pod {<ns> <pod>-69856fc5f7-d8hvv}`

  7. Recommender detects an OOM in the Pod (again): `cluster_feeder.go:445] OOM detected {Timestamp:<date> 11:08:56 +0000 UTC Memory:104857600 ContainerID:{PodID:{Namespace:<ns> PodName:<pod>} ContainerName:<container>}}`

  8. Recommender immediately fails to record the OOM with a KeyError: `cluster_feeder.go:447] Failed to record OOM {Timestamp:<date> 11:08:56 +0000 UTC Memory:104857600 ContainerID:{PodID:{Namespace:<ns> PodName:<pod>} ContainerName:<container>}}. Reason: KeyError: {<ns> <pod>-69856fc5f7-d8hvv}`

  9. The loop repeats.

From the codebase, it looks like that KeyError is returned when processing a non-existent pod (here). The pod no longer existed by the time the VPA Recommender tried to record the OOM. Because of this, the backup mechanism (the custom memory bump-up after OOMKill) wasn't able to increase the resources: VPA returned early with a KeyError (here) and never reached that code path.
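For illustration, here is a heavily simplified, self-contained sketch of the flow as I understand it (the types and names below are invented for this example, not the actual recommender API): the OOM sample is keyed by pod, so once the pod has been removed from the in-memory cluster state, recording the OOM fails before the bump-up logic is ever reached.

```go
package main

import (
	"errors"
	"fmt"
)

// PodID identifies a pod; analogous to the {Namespace PodName} pairs in the logs.
type PodID struct {
	Namespace, Name string
}

var errKeyError = errors.New("KeyError")

// clusterState is a stand-in for the recommender's in-memory pod map.
type clusterState struct {
	pods map[PodID]*podState
}

type podState struct {
	memoryRequestBytes int64
}

// recordOOM mimics the failure mode from the logs: if the pod was already
// deleted from the in-memory state, we return a KeyError before the
// "memory bump-up after OOMKill" logic is ever reached.
func (c *clusterState) recordOOM(id PodID, observedMemory int64) error {
	pod, ok := c.pods[id]
	if !ok {
		return fmt.Errorf("%w: %v", errKeyError, id)
	}
	// Hypothetical bump-up: raise the tracked memory sample so the next
	// recommendation is higher than the value that caused the OOM.
	bumped := observedMemory * 12 / 10
	if bumped > pod.memoryRequestBytes {
		pod.memoryRequestBytes = bumped
	}
	return nil
}

func main() {
	state := &clusterState{pods: map[PodID]*podState{}}
	evicted := PodID{Namespace: "ns", Name: "pod-69856fc5f7-d8hvv"}

	// The pod was evicted and removed from state before the OOM sample arrived,
	// so the bump-up never happens and the loop repeats.
	if err := state.recordOOM(evicted, 104857600); err != nil {
		fmt.Println("Failed to record OOM. Reason:", err)
	}
}
```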

We took the following steps to stop the bleeding:

How to reproduce it (as minimally and precisely as possible):

The error could be reproduced by having a memory-intensive workload that OOMs very quickly.
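As a rough illustration (not our actual workload), a container running something like the following Go program allocates and touches memory as fast as it can, so with a small memory limit, e.g. one derived from a too-low minAllowed, it is OOMKilled within seconds of starting:

```go
package main

// Quick-OOM workload sketch: allocate and touch memory in a tight loop so a
// container with a low memory limit is OOMKilled almost immediately after
// start, before the recommender can react.
func main() {
	const chunk = 16 << 20 // 16 MiB per allocation
	var hoard [][]byte
	for {
		buf := make([]byte, chunk)
		// Touch every page so the memory is actually resident (RSS grows).
		for i := 0; i < len(buf); i += 4096 {
			buf[i] = 1
		}
		hoard = append(hoard, buf)
	}
}
```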

voelzmo commented 2 months ago

Hey @nikimanoledaki thanks for filing this and the great investigation! We've been observing similar behavior where the recommender drops down to the minAllowed values. So far, most occurrences seem to have been connected to the vpa-recommender restarting right before the recommendation dropped, maybe this is something you can also look at in your environments. I'd love to compare notes about the investigations at some point!

I tried fixing some part of this a while ago with https://github.com/kubernetes/autoscaler/pull/5326 but that doesn't help if a KeyError prevents actually adding reasonable memory samples to increase the recommendations ;(

akloss-cibo commented 1 month ago

We've been using 1.1.1 for a while; it isn't explicitly causing us trouble, but we still get KeyError on OOM kill events pretty regularly, maybe 5 times daily.

nikimanoledaki commented 1 month ago

Thank you @voelzmo for your feedback!

most occurrences seem to have been connected to the vpa-recommender restarting right before the recommendation dropped

The VPA Recommender had been running for at least 24h before the update occurred, so unfortunately I'm not sure it was due to this. A few other pieces of info that we found in the meantime:

Thank you @akloss-cibo! Do you also see logs indicating that these are quick OOM kill events, e.g. `update_priority_calculator.go:114] quick OOM detected in pod <ns>/<pod>, container <container>`? Do these self-resolve after some time, or if not, what do you do?

adrianmoisey commented 1 month ago

/area vertical-pod-autoscaler

akloss-cibo commented 1 month ago

No, we don't see quick OOM detected messages at all. We see messages like:

W0522 23:52:40.499263       1 cluster_feeder.go:447] Failed to record OOM {Timestamp:2024-05-22 23:50:42 +0000 UTC Memory:2621440000 ContainerID:{PodID:{Namespace:thingy-dev PodName:thingy-dev-745886787d-9rhv9} ContainerName:thingy}}. Reason: KeyError: {thingy-dev thingy-dev-745886787d-9rhv9}

Right now, these don't appear to be creating a problem for us. Most of the time, it is logged (unhelpfully) for a pod that doesn't have a VPA covering it at all, but we do see it occasionally for pods that do have a VPA.

nikimanoledaki commented 1 month ago

Interesting. Have you considered using VPA in memory-saver mode? I'm looking into this now. It would skip the KeyError log - see codebase here. At first I didn't want to skip this error since it pointed to this issue but now I'm not convinced about the value of this error after all - curious to hear what you think 🤔
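To sketch what I mean by skipping, here is a toy model (invented names, only modelling the behaviour described above, not the real cluster_feeder code): with memory-saver on, OOM samples for pods that no VPA covers are simply dropped, so no "Failed to record OOM ... KeyError" line is emitted for them.

```go
// Sketch only: invented names, modelling the behaviour described above rather
// than the real cluster_feeder code.
package main

import "fmt"

type PodID struct{ Namespace, Name string }

type feeder struct {
	memorySaver bool
	hasVPA      map[PodID]bool // "is this pod selected by some VPA object?"
}

// handleOOM drops samples for pods without a VPA when memory-saver is on,
// so no "Failed to record OOM ... KeyError" line is emitted for them.
func (f *feeder) handleOOM(id PodID, memoryBytes int64) {
	if f.memorySaver && !f.hasVPA[id] {
		return // untracked pod: sample silently dropped
	}
	fmt.Printf("recording OOM for %v: %d bytes\n", id, memoryBytes)
}

func main() {
	f := &feeder{memorySaver: true, hasVPA: map[PodID]bool{
		{Namespace: "ns", Name: "covered-by-vpa"}: true,
	}}
	f.handleOOM(PodID{Namespace: "ns", Name: "covered-by-vpa"}, 104857600)
	f.handleOOM(PodID{Namespace: "batch", Name: "no-vpa"}, 104857600) // dropped
}
```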

Sidenote - memory-saver could also solve the VPA memory leak and reduce the resource utilization of VPA itself: https://github.com/kubernetes/autoscaler/issues/6368

akloss-cibo commented 1 month ago

I'm not deeply familiar with the VPA code, but my general-purpose programming instinct isn't to ignore a KeyError like this: something that should be populating clusterState isn't. I suppose if clusterState is populated by something scheduled, this is just a race condition, in which case it is "normal" for an OOM to occur for something not in clusterState. If that's true, I think the better solution is for clusterState.AddSample to handle this gracefully and add the appropriate entry to clusterState. It definitely shouldn't return a KeyError; even returning a "PodNotKnownError" would be better than KeyError.
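To sketch the suggestion (simplified, invented names, not a concrete proposal for the VPA API): a dedicated sentinel error lets callers distinguish "pod not known yet" from a generic lookup failure and decide explicitly how to handle the race.

```go
package main

import (
	"errors"
	"fmt"
)

type PodID struct{ Namespace, Name string }

// ErrPodNotKnown is the kind of dedicated error suggested above: callers can
// test for it with errors.Is and treat it as an expected race rather than a
// generic KeyError.
var ErrPodNotKnown = errors.New("pod not known to cluster state")

type clusterState struct {
	pods map[PodID]struct{}
}

func (c *clusterState) addSample(id PodID, memoryBytes int64) error {
	if _, ok := c.pods[id]; !ok {
		return fmt.Errorf("%w: %v", ErrPodNotKnown, id)
	}
	// ... record the sample for the known pod ...
	return nil
}

func main() {
	state := &clusterState{pods: map[PodID]struct{}{}}
	err := state.addSample(PodID{"thingy-dev", "thingy-dev-745886787d-9rhv9"}, 2621440000)
	if errors.Is(err, ErrPodNotKnown) {
		// Expected race: the pod was deleted (or never tracked); decide
		// explicitly whether to drop the sample or add a placeholder entry.
		fmt.Println("known race, handling gracefully:", err)
	}
}
```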

I may try memory saver mode anyway though; we do a fair amount of batch processing in Kubernetes which creates a lot of pods without VPA on occasion. (I have a VPA for vpa-recommender itself to handle this.) This leads me to wonder, why does the VPA track pods that don't have a VPA (aka why isn't memory-saver the only mode for VPA)?

voelzmo commented 1 month ago

This leads me to wonder, why does the VPA track pods that don't have a VPA (aka why isn't memory-saver the only mode for VPA)?

VPA's standard implementation for historical storage is VPACheckpoints, which means that if you decide to start tracking a certain workload with VPA, it takes ~8 days to get the most accurate recommendation. If you don't run in memory-saver mode, the VPA already gathers data about containers in memory, so if you decide to enable VPA for a new container and the recommender has already been running for 8 days, you get accurate recommendations straight away.
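To make the warm-up effect concrete, here is a toy model (my own numbers and names, not the recommender's actual aggregation code): if a recommendation is derived from roughly a week of per-day memory peaks, a container that has only been tracked for a day misses the weekly spikes, whereas a recommender that was already tracking it has the full history.

```go
package main

import "fmt"

// Toy model of in-memory history: one memory peak per day, and a
// recommendation derived from whatever peaks have been collected so far. The
// real recommender uses decaying histograms; this only illustrates the
// warm-up effect described above.
func recommend(dailyPeaksMiB []float64) float64 {
	peak := 0.0
	for _, p := range dailyPeaksMiB {
		if p > peak {
			peak = p
		}
	}
	return peak * 1.15 // some headroom on top of the highest observed peak
}

func main() {
	// ~8 days of observed daily peaks, with occasional spikes.
	history := []float64{300, 900, 350, 310, 320, 1100, 330, 340}

	// Freshly tracked workload: only one day of data, so the spikes are missed.
	fmt.Printf("after 1 day:  %.0f MiB\n", recommend(history[:1]))

	// Recommender that was already tracking the container (or after ~8 days).
	fmt.Printf("after 8 days: %.0f MiB\n", recommend(history))
}
```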

I absolutely think --memory-saver=true should be the default, but depending on your use-cases, not running in memory-saver mode can be a useful thing.

nikimanoledaki commented 1 month ago

This is great feedback, @akloss-cibo & @voelzmo! My gut feeling was also not to skip the KeyErrors, which is why we have not enabled memory-saver yet. However, the use case that @voelzmo describes makes sense to me. It sounds like memory-saver could make VPA easier to maintain. Currently, the main issue with maintaining VPA in our large Kubernetes clusters is the frequent OOMKill errors, which may indicate that VPA's Updater/Recommender is losing state and causing the bug described earlier in this issue.

See this feature request for HA VPA + my comment here: https://github.com/kubernetes/autoscaler/issues/6846#issuecomment-2136860926

voelzmo commented 1 month ago

My gut feeling was also not to skip the KeyErrors, which is why we have not enabled memory-saver yet.

This is true: enabling memory-saver mode will mask KeyErrors that arise for other reasons. So if you don't have this mode enabled and still see KeyErrors (as described in this bug's OP), that might be an indication that something else is going wrong. So for investigating this bug, it will probably be helpful to leave memory-saver=false.

nikimanoledaki commented 2 weeks ago

Hi folks! I have some updates and questions regarding the original issue.

We've seen this issue occur when VPA caps the memory request/limit to minAllowed. This value was set too low, so the workload was OOMKilled almost immediately after spinning up, before VPA had time to update its resources. The Pod is then not found, hence the KeyError. Increasing minAllowed fixes the OOMKill issue.

However, some workloads have memory utilization that varies widely, e.g. 100MiB to 1GiB+. Increasing minAllowed for all of them means that some are overprovisioned, which increases cost significantly.
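For concreteness, the bound itself is (as far as I can tell) a per-container clamp of the recommendation to the resource policy's minAllowed/maxAllowed; the sketch below (simplified, invented names, not the VPA code) shows why a single minAllowed sized for the largest workload overprovisions the smaller ones.

```go
package main

import "fmt"

// clamp models bounding a recommendation by the resource policy's
// minAllowed/maxAllowed (simplified; invented names, not the VPA code).
func clamp(recommendedMiB, minAllowedMiB, maxAllowedMiB float64) float64 {
	if recommendedMiB < minAllowedMiB {
		return minAllowedMiB
	}
	if recommendedMiB > maxAllowedMiB {
		return maxAllowedMiB
	}
	return recommendedMiB
}

func main() {
	// A single minAllowed sized for the biggest workload in the group (1 GiB)...
	const minAllowedMiB, maxAllowedMiB = 1024.0, 4096.0

	// ...overprovisions the workloads that would otherwise run at ~100-300 MiB.
	for _, recommendedMiB := range []float64{100, 300, 1200} {
		fmt.Printf("recommendation %4.0f MiB -> request %4.0f MiB\n",
			recommendedMiB, clamp(recommendedMiB, minAllowedMiB, maxAllowedMiB))
	}
}
```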

How does the VPA Updater decide to cap to minAllowed? The circumstances for this recommendation are unclear to me.

Secondly, for some workloads, this capping decision seems incorrect based on their previous utilization data. I'm not sure how to investigate this in the codebase. Do you think this could be due to the wide range of memory utilization for this type of workload?