What happened?:
I am trying to handle a custom metric:
ping_request_count{instance="172.30.27.146:8090", job="sample_go_server"}
It is generated and pushed to Prometheus by a program running outside the cluster, not in a Kubernetes pod.
What did you expect to happen?:
I expected the Prometheus adapter to expose this metric via the custom metrics API using the configuration given below, but that is not happening. I have tried several variations and failed to make it work.
The metric does not appear in the output of: kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq
Please provide the prometheus-adapter config:
prometheus-adapter config
```yaml
- seriesQuery: 'ping_request_count'
  resources:
    overrides:
  name:
    matches: "ping_request_count"
    as: "ping_request_count"
  metricsQuery: sum(<<.Series>>)
```

Please provide the HPA resource used for autoscaling:
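For the adapter to list a series under the custom metrics API, the series must be associable with a Kubernetes resource through `resources.overrides` (or `resources.template`). A sketch of a rule that maps a `namespace` label to the namespace resource; this assumes the series actually carries such a label, which is not the case for the series shown above:

```yaml
# Sketch, not the reporter's config: assumes ping_request_count
# carries a `namespace` label. With an empty `overrides`, the
# adapter cannot tie the series to any resource and will not
# expose it at /apis/custom.metrics.k8s.io/v1beta1/.
rules:
- seriesQuery: 'ping_request_count{namespace!=""}'
  resources:
    overrides:
      namespace: {resource: "namespace"}
  name:
    matches: "ping_request_count"
    as: "ping_request_count"
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)
```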
HPA yaml
Please provide the HPA status:
Please provide the prometheus-adapter logs with -v=6 around the time the issue happened:
prometheus-adapter logs
Anything else we need to know?: Does the adapter serve custom metrics only for series published from pods running inside Kubernetes, i.e. series carrying labels with the following mapping:
```yaml
resources:
  overrides:
    kubernetes_namespace: {resource: "namespace"}
    kubernetes_pod_name: {resource: "pod"}
```
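For a metric produced outside the cluster with no pod or namespace labels, the adapter's external metrics API may be a better fit than the custom metrics API. A minimal sketch, assuming a prometheus-adapter version that supports `externalRules` (the exact rule shape should be checked against the adapter's config documentation):

```yaml
# Sketch: expose ping_request_count as an external metric
# instead of a resource-associated custom metric.
externalRules:
- seriesQuery: 'ping_request_count'
  name:
    matches: "ping_request_count"
    as: "ping_request_count"
  metricsQuery: sum(<<.Series>>)
```

If this works, the metric should be visible via kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/" | jq, and an HPA can consume it through a `type: External` metric source.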
Environment:
Kubernetes version (use `kubectl version`): v1.23.1