apilny-akamai opened 1 day ago
/area vertical-pod-autoscaler
Would it be possible to see the spec of the Pod that this is failing on? Which variant of Kubernetes are you running this on?
/triage needs-information
We use standard kubeadm, K8s Rev: v1.25.16. I've updated the description with an example Pod Spec.
Hi. It seems like you added the VPA spec; I'm looking for the spec of the Pod `kube-controller-manager-master-1`.
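If it helps, the full spec of a static control-plane pod can usually be dumped as follows (assuming the pod sits in the `kube-system` namespace, as kubeadm-managed control-plane pods normally do):

```shell
# Dump the full manifest of the mirror pod, including metadata.ownerReferences
kubectl -n kube-system get pod kube-controller-manager-master-1 -o yaml
```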
Which component are you using?: vertical-pod-autoscaler
What version of the component are you using?: Component version: 1.1.2
What k8s version are you using (`kubectl version`)?: kubectl 1.25
What did you expect to happen?: VPA updater does not error with
```
fail to get pod controller: pod=kube-scheduler-XYZ err=Unhandled targetRef v1 / Node / XYZ, last error node is not a valid owner
```
What happened instead?: vpa-updater log contains:

```
E1010 12:38:44.476232 1 api.go:153] fail to get pod controller: pod=kube-apiserver-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.477788 1 api.go:153] fail to get pod controller: pod=kube-controller-manager-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.547767 1 api.go:153] fail to get pod controller: pod=etcd-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.554646 1 api.go:153] fail to get pod controller: pod=kube-scheduler-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
```
How to reproduce it (as minimally and precisely as possible): Update VPA from 0.4 to 1.1.2 and observe the vpa-updater log.
Anything else we need to know?: I've also tried updating to 1.2.1, and the error appears in the log again. It did not happen with VPA 0.4. I can see this error message in an already-fixed issue about a panic/SIGSEGV problem, but nowhere else.
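A likely explanation (my assumption, not confirmed in this thread): kubeadm runs the control plane as static pods, and the kubelet registers each one as a mirror pod whose only `ownerReference` points at the Node it runs on. The updater's controller fetcher cannot map a `v1/Node` owner to a scalable controller, which matches the `Unhandled targetRef v1 / Node / x-master-1` errors above. Illustrative (hypothetical) mirror-pod metadata, using the names from the log:

```yaml
# Hypothetical mirror-pod metadata: the kubelet sets the owning Node as the
# sole ownerReference, which the VPA updater cannot resolve to a scalable
# controller (hence "Unhandled targetRef v1 / Node").
metadata:
  name: kube-controller-manager-master-1
  namespace: kube-system
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: x-master-1
    controller: true
```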
Example Pod Spec