Describe the bug
There is a Deployment installed by Helm, so its object metadata carries Helm-specific labels.
When the metric gatherer queries the custom metrics API server (the Prometheus adapter in this case), it sends all of the labels from the Deployment's metadata as the label selector. However, the Pods managed by that Deployment do not carry the metadata labels assigned to the Deployment by an external agent (Helm).
As a result, the custom metrics server returns an empty list of metrics.
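A minimal sketch of the mismatch (the labels below are illustrative, taken from the log output further down): a selector built from the Deployment's metadata labels includes the Helm-injected label, which the managed Pods never carry, so no Pod matches.

```python
# Sketch of the label-matching problem; not actual CPA code.
def matches(selector: dict, pod_labels: dict) -> bool:
    """A pod matches only if it carries every key=value in the selector."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# Labels on the Deployment's metadata (Helm injects managed-by here):
deployment_metadata_labels = {
    "app.kubernetes.io/managed-by": "Helm",
    "component": "transcoding-agent",
}
# Labels actually present on the managed Pods:
pod_labels = {"component": "transcoding-agent"}

print(matches(deployment_metadata_labels, pod_labels))  # False: Helm label absent on the Pod
print(matches({"component": "transcoding-agent"}, pod_labels))  # True
```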
To Reproduce
Steps to reproduce the behavior:
Deploy a Deployment using Helm.
Create a Custom Pod Autoscaler targeting the Deployment.
Observe the errors generated by the CPA, for example:
I0331 23:06:53.676418 1 shell.go:90] Shell command failed, stderr: 2022/03/31 23:06:53 invalid metrics (1 invalid out of 1), first error is: failed to get pods metric: unable to get metric node_transcoder_gpu_decoder_score: unable to fetch metrics from custom metrics API: the server could not find the metric node_transcoder_gpu_decoder_score for pods
E0331 23:06:53.676462 1 main.go:289] exit status 1
The metrics server returns a 404:
I0331 23:08:53.534868 1 httplog.go:104] "HTTP" verb="GET" URI="/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/node_transcoder_gpu_decoder_score?labelSelector=app.kubernetes.io%2Fmanaged-by%3DHelm%2Ccomponent%3Dtranscoding-agent" latency="11.1253ms" userAgent="predictive-horizontal-pod-autoscaler/v0.0.0 (linux/amd64) kubernetes/$Format" audit-ID="75d9766b-6b1d-422f-8d53-b7fce0d66e8e" srcIP="192.168.65.3:63400" resp=404
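URL-decoding the labelSelector from the 404 request above shows that the Helm-injected metadata label is included in the query sent to the custom metrics API:

```python
from urllib.parse import unquote

# labelSelector query parameter copied verbatim from the 404 log line above
encoded = "app.kubernetes.io%2Fmanaged-by%3DHelm%2Ccomponent%3Dtranscoding-agent"
print(unquote(encoded))
# app.kubernetes.io/managed-by=Helm,component=transcoding-agent
```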
Expected behavior
The CPA should not rely on arbitrary labels assigned to the Deployment in the process. Instead, it should use the label selector that is part of the Pods metric object.
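One way the fix could look (a sketch under my own assumptions, not the actual CPA code; the field names follow the standard Kubernetes Deployment spec): derive the selector from spec.selector.matchLabels, which is guaranteed to match the managed Pods, rather than from metadata.labels.

```python
# Sketch: build the custom-metrics labelSelector from the Deployment's
# spec.selector instead of its metadata labels.
def selector_from_deployment(deployment: dict) -> str:
    """Join spec.selector.matchLabels into a labelSelector string."""
    match_labels = deployment["spec"]["selector"]["matchLabels"]
    return ",".join(f"{k}={v}" for k, v in sorted(match_labels.items()))

deployment = {
    "metadata": {
        # Helm injects labels here that the Pods never carry:
        "labels": {
            "app.kubernetes.io/managed-by": "Helm",
            "component": "transcoding-agent",
        },
    },
    "spec": {
        # Only these labels are guaranteed to be on the managed Pods:
        "selector": {"matchLabels": {"component": "transcoding-agent"}},
    },
}

print(selector_from_deployment(deployment))
# component=transcoding-agent
```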
Kubernetes Details (kubectl version):
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
phpa:
git rev-parse master
a8f581d43d689c8ae439ad12b9d2425d4419949d