Closed: chinglinwen closed this issue 3 years ago
+1
+1
I had a look at the code. If it only needed a version bump that would be easy, but it turns out the v1 Spec dropped the Metrics field, which makes the change complicated.
https://godoc.org/k8s.io/api/autoscaling/v1#HorizontalPodAutoscalerSpec vs https://godoc.org/k8s.io/api/autoscaling/v2beta1#HorizontalPodAutoscalerSpec (v2beta1 has the extra Metrics field)
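For reference, here is a trimmed side-by-side of the two spec types, abbreviated from the godoc pages above. The types below are hand-copied stand-ins for comparison only, not the real k8s.io/api definitions:

```go
package hpaspec

// Trimmed stand-ins for the real k8s.io/api types, for comparison only.
type CrossVersionObjectReference struct{ Kind, Name, APIVersion string }
type MetricSpec struct{} // holds Resource/Pods/Object/External metric sources in the real type

// autoscaling/v1: a single CPU-utilization target and nothing else.
type HorizontalPodAutoscalerSpecV1 struct {
	ScaleTargetRef                 CrossVersionObjectReference
	MinReplicas                    *int32
	MaxReplicas                    int32
	TargetCPUUtilizationPercentage *int32
}

// autoscaling/v2beta1: the generic Metrics slice that kube-state-metrics
// iterates over to emit per-metric series; this is the field v1 lacks.
type HorizontalPodAutoscalerSpecV2beta1 struct {
	ScaleTargetRef CrossVersionObjectReference
	MinReplicas    *int32
	MaxReplicas    int32
	Metrics        []MetricSpec
}
```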
$ grep -r autoscaling|grep k8s.io|grep -v '^vendor'
internal/store/verticalpodautoscaler.go: autoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2"
internal/store/horizontalpodautoscaler.go: autoscaling "k8s.io/api/autoscaling/v2beta1"
internal/store/builder.go: autoscaling "k8s.io/api/autoscaling/v2beta1"
internal/store/builder.go: vpaautoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2"
internal/store/verticalpodautoscaler_test.go: k8sautoscaling "k8s.io/api/autoscaling/v1"
internal/store/verticalpodautoscaler_test.go: autoscaling "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1beta2"
internal/store/horizontalpodautoscaler_test.go: autoscaling "k8s.io/api/autoscaling/v2beta1"
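To make the breakage concrete, here's a hypothetical sketch of what a naive version bump in internal/store/horizontalpodautoscaler.go runs into. hpaMetricCount is a made-up helper standing in for the real metric generators; the point is that it no longer compiles:

```go
package store

import (
	autoscaling "k8s.io/api/autoscaling/v1" // bumped from v2beta1
)

// hpaMetricCount is a made-up helper standing in for the real metric
// generators, which range over the HPA's configured metrics.
func hpaMetricCount(a *autoscaling.HorizontalPodAutoscaler) int {
	return len(a.Spec.Metrics) // compile error: v1's HorizontalPodAutoscalerSpec has no Metrics field
}
```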
Here are simpler steps to reproduce (also on Kubernetes v1.17.2):
$ ./kube-state-metrics --port=8080 --telemetry-port=8081 --kubeconfig=/home/wen/.kube/config --resources=verticalpodautoscalers
I0520 23:50:01.405957 16922 main.go:89] Using resources verticalpodautoscalers
I0520 23:50:01.406045 16922 main.go:98] Using all namespace
I0520 23:50:01.406061 16922 main.go:119] metric allow-denylisting: Excluding the following lists that were on denylist:
I0520 23:50:01.408708 16922 main.go:166] Testing communication with server
I0520 23:50:01.419107 16922 main.go:171] Running with Kubernetes cluster version: v1.17. git version: v1.17.2. git tree state: clean. commit: 59603c6e503c87169aea6106f57b9f242f64df89. platform: linux/amd64
I0520 23:50:01.419140 16922 main.go:173] Communication with server successful
I0520 23:50:01.419352 16922 main.go:207] Starting metrics server: 0.0.0.0:8080
I0520 23:50:01.419393 16922 main.go:182] Starting kube-state-metrics self metrics server: 0.0.0.0:8081
I0520 23:50:01.419459 16922 metrics_handler.go:96] Autosharding disabled
I0520 23:50:01.419545 16922 builder.go:157] Active resources: verticalpodautoscalers
E0520 23:50:01.421524 16922 reflector.go:153] /home/wen/git/kube-state-metrics/internal/store/builder.go:352: Failed to list *v1.VerticalPodAutoscaler: the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
E0520 23:50:02.423263 16922 reflector.go:153] /home/wen/git/kube-state-metrics/internal/store/builder.go:352: Failed to list *v1.VerticalPodAutoscaler: the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
E0520 23:50:03.424910 16922 reflector.go:153] /home/wen/git/kube-state-metrics/internal/store/builder.go:352: Failed to list *v1.VerticalPodAutoscaler: the server could not find the requested resource (get verticalpodautoscalers.autoscaling.k8s.io)
My diff is here: https://gist.github.com/chinglinwen/8af2ae3a4bf2bcd90ae6cd19b11495b9 (this patch doesn't fix the issue; it was just for experimenting).
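For what it's worth, the failing List just means the API server doesn't serve the autoscaling.k8s.io group, i.e. the VPA CRDs aren't installed. A minimal standalone check with client-go's discovery client (kubeconfig path taken from the command line above) would look roughly like this:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig used on the command line above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/wen/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether it serves the VPA CRD's group/version.
	// Without the CRDs installed this fails with a "not found" error,
	// matching the reflector failures in the log.
	resources, err := dc.ServerResourcesForGroupVersion("autoscaling.k8s.io/v1beta2")
	if err != nil {
		fmt.Println("VPA group/version not served:", err)
		return
	}
	for _, r := range resources.APIResources {
		fmt.Println("served:", r.Name)
	}
}
```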
Hello! Please note that verticalpodautoscalers are opt-in only and should not be enabled if you don't have the required VPA CRDs. You can do either of the following:
i) Disable VPAs (assuming you're using the helm chart), OR
ii) Install the CRDs: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/deploy/vpa-beta2-crd.yaml
An update: if you're using the helm chart and hitting this issue, please use the latest chart version: https://github.com/helm/charts/blob/master/stable/kube-state-metrics/Chart.yaml. We have turned VPAs off by default.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Is this a BUG REPORT or FEATURE REQUEST?:
What happened: applied kube-state-metrics from the helm template
What you expected to happen: no "Failed to list *v1beta2.VerticalPodAutoscaler" errors
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version): v1.17.2
kube-state-metrics image: quay.io/coreos/kube-state-metrics:v1.9.5