Open FilimonovEugene opened 8 months ago
Perhaps the metrics server did not obtain monitoring data because of the pod's short lifetime. Is there any related log, or is more detail required?
That can definitely be a reason; I would also expect VPA to react to OOMKilled events and increase the memory target recommendation for the next pod start.
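The bump-up behavior this comment expects can be sketched as follows. This is a hedged illustration, not the recommender's actual code: the constants mirror VPA-style defaults (a 20% multiplicative bump with a 100 MiB additive floor), and the function name `memoryAfterOOM` is an assumption for this example.

```go
package main

import "fmt"

// Assumed defaults for illustration; the real recommender's OOM handling
// lives in the VPA source and may differ.
const (
	oomBumpUpRatio = 1.2               // multiply memory used at OOM by 20%
	oomMinBumpUp   = 100 * 1024 * 1024 // but bump by at least 100 MiB
)

// memoryAfterOOM returns the memory sample (in bytes) that would be
// recorded after a container is OOMKilled while using memoryUsed bytes:
// the larger of the multiplicative and the additive bump.
func memoryAfterOOM(memoryUsed float64) float64 {
	bumped := memoryUsed * oomBumpUpRatio
	if floor := memoryUsed + oomMinBumpUp; floor > bumped {
		bumped = floor
	}
	return bumped
}

func main() {
	// A pod OOMKilled at 256 MiB: the additive floor wins (256+100 MiB).
	fmt.Printf("%.0f\n", memoryAfterOOM(256*1024*1024)) // prints 373293056
}
```

For short-lived CronJob pods this event-driven path matters more than periodic metrics sampling, since the pod may be gone before the next scrape.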
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After a period of inactivity, lifecycle/stale is applied
- After further inactivity once lifecycle/stale was applied, lifecycle/rotten is applied
- After further inactivity once lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
Which component are you using?:
vertical-pod-autoscaler
What version of the component are you using?:
Component version: 1.0.0
What k8s version are you using (kubectl version)?:
v1.25.14-eks-f8587cb
What environment is this in?:
AWS EKS
What did you expect to happen?:
VPA should track CronJob OOMKilled events and adjust resource requests and limits.
What happened instead?:
VPA doesn't react to Job OOMKilled events.
How to reproduce it (as minimally and precisely as possible):
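A minimal reproduction along the lines of the report could look like the sketch below. All names, the schedule, and the memory sizes are illustrative assumptions (they are not taken from the original report); the container simply allocates more memory than its limit so every run is OOMKilled, and a VPA targets the CronJob.

```shell
# Hypothetical repro sketch: CronJob that OOMKills on every run + a VPA
# targeting it. Names and sizes are illustrative, not from the report.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: CronJob
metadata:
  name: oom-demo
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: hog
            image: python:3.11-slim
            # Allocates ~256 MiB against a 64 MiB limit -> OOMKilled.
            command: ["python", "-c", "b = bytearray(256 * 1024 * 1024); import time; time.sleep(10)"]
            resources:
              requests:
                memory: 32Mi
              limits:
                memory: 64Mi
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: oom-demo-vpa
spec:
  targetRef:
    apiVersion: batch/v1
    kind: CronJob
    name: oom-demo
  updatePolicy:
    updateMode: "Initial"
EOF

# Check whether the memory target ever rises above the initial request:
kubectl describe vpa oom-demo-vpa
```

With updateMode "Initial", each new Job pod is admitted with the current recommendation, so a memory target that keeps rising after repeated OOMKills would confirm the expected behavior; a target stuck at its initial value reproduces the reported problem.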