@julianstephen Sorry for the state of flux right now; yes, that's the case. For v1.7 I'm updating to use @DirectXMan12's adapter instead of my hand-hacked one: https://github.com/DirectXMan12/k8s-prometheus-adapter
Also, as pointed out, the priority scheme changed when API aggregation (AA) went from alpha to beta; I'll fix that. Feel free to send a PR any time though when you find things like this :)
Thanks. I will do that.
I am using the latest version as of when this issue was reported (commit 1e696ace333..). There seems to be an issue with the HPA getting the value of the custom metric exposed by sample-metrics-app. When I curl the custom metrics API URL
https://${CM_API}/apis/custom-metrics.metrics.k8s.io/.....http_requests_total
, the value field in the resulting JSON is always 0, even after I run the load generator. I checked the custom-metrics-apiserver logs (`kubectl logs custom-metrics-apiserver`) and found that the Prometheus query issued by the metrics apiserver always returns an empty vector. The sample-metrics-app itself is exposing the value correctly. When I curl
sample-metrics-app-ip:9090/metrics
, I can see the metric being reported. But when I curl the Prometheus endpoint, things get a little fishy: the values I see there don't seem to reflect the value exposed by sample-metrics-app. I am not sure whether the link to Prometheus is broken on both sides (Prometheus to app and Prometheus to apiserver).
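For reference, a rough sketch of the checks described above; the hostnames, namespace, and the exact metric path below are placeholders rather than the real values from my cluster:

```sh
# Rough sketch of the checks above. Hostnames, namespace, and the full metric
# path are placeholders, not the actual values from this setup.

# 1. Query the custom metrics API through the aggregated apiserver;
#    the "value" field in the returned JSON is always 0 here.
curl -k "https://${CM_API}/apis/custom-metrics.metrics.k8s.io/v1alpha1/namespaces/default/services/sample-metrics-app/http_requests_total"

# 2. Check what queries the adapter is sending to Prometheus.
kubectl logs <custom-metrics-apiserver-pod>

# 3. Confirm the app itself exposes the metric.
curl "http://<sample-metrics-app-ip>:9090/metrics" | grep http_requests_total

# 4. Ask Prometheus directly what it has scraped.
curl "http://<prometheus-ip>:9090/api/v1/query?query=http_requests_total"
```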
As a quick aside, with the upgrade to rc-1, the custom metrics APIService needs two additional params:
I don't believe this has any effect on what I am reporting, but for completeness' sake, I set groupPriorityMinimum to 200 and versionPriority to 20.
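For completeness, here is a minimal sketch of the APIService object with those two fields set; the group, version, service name, and namespace are assumptions based on this setup and may differ elsewhere:

```sh
# Minimal sketch of the custom metrics APIService with the two new fields.
# Group, version, service name, and namespace are assumptions for this setup.
cat <<EOF | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.custom-metrics.metrics.k8s.io
spec:
  group: custom-metrics.metrics.k8s.io
  version: v1alpha1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 200   # new requirement as of rc-1
  versionPriority: 20         # new requirement as of rc-1
  service:
    name: custom-metrics-apiserver   # assumed service name
    namespace: custom-metrics        # assumed namespace
EOF
```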
[Update]: Just tried with the previous commit (before updating the prom-operator version) and things seem fine 👍. Though in both versions, deleting the Prometheus object (`kubectl delete prometheus sample-metrics-prom`) seems to slow down all further queries to the API server.