aknam opened this issue 1 year ago (status: Open)
I'm not exactly sure what the described problem is, but I think it comes down to how often the HPA fetches metrics. You can check that kube-metrics-adapter itself is up to date by querying the metric like this:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/queue-length?labelSelector=foo%3Dbar" | jq '.'
# (where `foo%3Dbar` is the URL-encoded form of the label selector `foo=bar` on the targeted deployment's pods)
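For reference (not from the original thread), a healthy response from that endpoint is a `MetricValueList` with one entry per matching pod. A rough sketch of what it can look like, where the pod name, timestamp, and value are placeholders:

```json
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {},
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "my-app-7d9f8c6b5-abcde",
        "apiVersion": "/v1"
      },
      "metricName": "queue-length",
      "timestamp": "2024-01-01T00:00:00Z",
      "value": "3"
    }
  ]
}
```

If the per-pod values here drop to zero right after the pods restart, the adapter side is fresh and any remaining delay is on the HPA controller's side.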
Expected Behavior
We want the average value to be reflected more quickly, so that the delay does not cause downtime in the application.
Actual Behavior
We are using a custom metric for our application with the json-path interval set to 1 second, as shown in the annotation below. However, when we restart all the pods it takes more than 5 minutes for the average value to reach zero, even though the adapter logs show all our metrics as zero right after the restart. Is there any way to make this average value update faster so that it does not impact the application?
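One way to see where the delay sits (a diagnostic sketch, not part of the original report; the HPA name `my-app-hpa` and the `default` namespace are assumptions) is to watch the HPA object while the pods restart and compare it with the adapter query above:

```sh
# Watch the HPA's reported current metric value and replica count over time
kubectl -n default get hpa my-app-hpa --watch

# Inspect the metric values and events the HPA controller last observed
kubectl -n default describe hpa my-app-hpa
```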
HPA configuration annotations
metric-config.pods.used-pool-length.json-path/interval: "1s"
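For context, this annotation lives on the HPA object together with the rest of the json-path collector configuration. A minimal sketch of such an HPA is shown below; the metrics port, path, JSON key, replica bounds, and target value are assumptions and need to be adapted to the actual application:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                     # hypothetical name
  namespace: default
  annotations:
    # json-path collector configuration for kube-metrics-adapter
    metric-config.pods.used-pool-length.json-path/json-key: "$.used_pool_length"  # assumed JSON key
    metric-config.pods.used-pool-length.json-path/path: /metrics                  # assumed endpoint
    metric-config.pods.used-pool-length.json-path/port: "9090"                    # assumed port
    metric-config.pods.used-pool-length.json-path/interval: "1s"                  # from the report
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                       # hypothetical deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: used-pool-length
      target:
        type: AverageValue
        averageValue: "10"             # assumed target
```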
kube-metrics-adapter logs