mattnworb closed this issue 5 years ago.
Yep, this is fixed at HEAD by #1134. We're planning to release 0.3.0 sometime in the next two weeks.
@bskiba any update on the 0.3.0 release?
We're seeing a very similar issue in the updater. Is that the same issue, or a separate one?
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xe572b3]
goroutine 1 [running]:
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority.(*UpdatePriorityCalculator).getUpdatePriority(0xc420b1fb00, 0xc420b69898, 0xc4220bb0c0, 0xc420b69898, 0xc4220bb0c0, 0x0)
	/usr/local/google/home/bskiba/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go:121 +0x7b3
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority.(*UpdatePriorityCalculator).AddPod(0xc420b1fb00, 0xc420b69898, 0xc42031b3a0, 0xbeeedeeffa5d1c4b, 0xe07eb56b9, 0x17bb6e0)
	/usr/local/google/home/bskiba/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go:75 +0x1b3
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic.(*updater).getPodsForUpdate(0xc42039fec0, 0xc42030c780, 0x1, 0x1, 0xc42019ec40, 0xc42030c780, 0x1, 0x1)
	/usr/local/google/home/bskiba/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic/updater.go:123 +0x1d9
k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic.(*updater).RunOnce(0xc42039fec0)
	/usr/local/google/home/bskiba/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/logic/updater.go:102 +0xa3f
main.main()
	/usr/local/google/home/bskiba/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/updater/main.go:55 +0x16e
```
Different issue, but already fixed on master: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/updater/priority/update_priority_calculator.go#L148
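For anyone hitting the same symptom: the panic is a nil pointer dereference, and the linked fix guards against it before computing a pod's update priority. Below is a minimal, self-contained sketch of that guard pattern; the `Recommendation`, `Resources`, and `priorityFor` names are simplified stand-ins for illustration, not the actual VPA types.

```go
package main

import "fmt"

// Resources is a simplified stand-in for a resource recommendation value.
type Resources struct {
	CPUMillis int64
}

// Recommendation mirrors the shape that can cause the panic: a pointer
// field that may be nil when the recommender has not produced a value yet.
type Recommendation struct {
	Target *Resources
}

// priorityFor sketches the guard pattern: check for nil before
// dereferencing, and skip the pod instead of panicking.
func priorityFor(rec *Recommendation) (int64, bool) {
	if rec == nil || rec.Target == nil {
		// No recommendation yet: report "skip" rather than crash.
		return 0, false
	}
	return rec.Target.CPUMillis, true
}

func main() {
	if _, ok := priorityFor(nil); !ok {
		fmt.Println("skipped pod with no recommendation")
	}
	if p, ok := priorityFor(&Recommendation{Target: &Resources{CPUMillis: 250}}); ok {
		fmt.Println("priority:", p)
	}
}
```

The point of the pattern is that a pod without a recommendation is a normal, expected state (the recommender may simply not have run yet), so it should be skipped, not treated as a fatal error.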
Sorry for the delay on 0.3.0, I expect to be able to work on it next week.
Update: I'm currently testing the new image, should be able to release around Tuesday next week.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale
Both 0.3.0 and 0.3.1 are available and should be free of this issue. /close
@bskiba: Closing this issue.
@bskiba thanks for the update
At first glance this looks similar to #1258, but my panic and stack trace look different. Also note the pod runs fine for some time before it panics:
This is with the image `k8s.gcr.io/vpa-recommender:0.2.0`.