prageethw opened this issue 4 years ago
@prageethw this looks to me like a Kubernetes vs client-go version issue. Can you check if a cluster with version >= 1.17 works?
@szuecs tried with k8s 1.17.X, it still fails:
kube-metrics-adapter-7b79498f9-g42j8 kube-metrics-adapter E0717 09:33:22.393669 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-g42j8 kube-metrics-adapter E0717 09:33:22.394841 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7bwgs kube-metrics-adapter E0717 09:33:23.017651 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7bwgs kube-metrics-adapter E0717 09:33:23.020493 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.6-eks-4e7f64", GitCommit:"4e7f642f9f4cbb3c39a4fc6ee84fe341a8ade94c", GitTreeState:"clean", BuildDate:"2020-06-11T13:55:35Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
@prageethw I think it could be related to: https://github.com/zalando-incubator/kube-metrics-adapter/issues/142. It seems an RBAC rule that used to be included by default is no longer there in some setups. Try the suggested steps in that issue.
@mikkeloscar I had a look at the Helm chart (Banzai Cloud); the rule seems to already exist, but I still see the error in the logs. The metrics do seem to be pulled successfully though, so it looks like just an annoying defect :) https://github.com/banzaicloud/kube-metrics-adapter/blob/master/deploy/charts/kube-metrics-adapter/templates/rbac.yaml#L42
@mikkeloscar yeah, you are right, it was not in the collector ClusterRole though. I just added it and sent a pull request to the Helm chart.
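For reference, the kind of rule being discussed looks roughly like the sketch below. This is illustrative only: the ClusterRole, ClusterRoleBinding, ServiceAccount, and namespace names are assumptions, not taken from the chart or the PR, and the authoritative change is the one in the Helm chart pull request. The point is granting the adapter's service account get/list/watch on ConfigMaps, which the "unknown (get configmaps)" watch errors indicate is missing.

```yaml
# Minimal sketch of the missing permission. All names below are hypothetical;
# adapt them to the ClusterRole/ServiceAccount your chart actually creates.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-metrics-adapter-collector   # hypothetical name
rules:
  - apiGroups: [""]                       # core API group
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]       # watch is what the reflector errors complain about
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-metrics-adapter-collector   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-metrics-adapter-collector
subjects:
  - kind: ServiceAccount
    name: kube-metrics-adapter            # hypothetical name
    namespace: kube-system                # adjust to the namespace the adapter runs in
```

After a rule like this is in place, a check such as `kubectl auth can-i watch configmaps --as=system:serviceaccount:<namespace>:<serviceaccount>` should return "yes" and the reflector errors should stop.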
Fix https://github.com/zalando-incubator/kube-metrics-adapter/pull/181 will resolve this issue once it is merged.
Also tested the fix with image v0.1.5 and it worked perfectly:
time="2020-08-04T11:47:36Z" level=info msg="Found 1 new/updated HPA(s)" provider=hpa
time="2020-08-04T11:47:36Z" level=info msg="Collected 1 new metric(s)" provider=hpa
Thanks for the fix ;)
Expected Behavior
Metrics should be pulled.
Actual Behavior
1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:16.970700 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:17.972675 1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.ConfigMap: unknown (get configmaps)
kube-metrics-adapter-7b79498f9-7b8rt kube-metrics-adapter E0717 03:49:17.973213
Steps to Reproduce the Problem
Specifications