### 📚 The doc issue

In the `kubernetes/autoscale.md` file, the current implementation uses the `ts_queue_latency_microseconds` metric for scaling the Horizontal Pod Autoscaler (HPA). This metric is a counter, which only increases over time and never decreases, so the HPA will keep scaling the number of pods up and never scale them down when load decreases.
### Suggest a potential alternative/fix
To resolve this issue, it is recommended to use the rate of the counter metric over a time interval to enable both scaling up and down effectively.
**Use the Rate Function:**
Utilize the `rate` function in Prometheus to calculate the rate of change of the `ts_queue_latency_microseconds` metric. This provides a per-second average rate of increase over a specified time window (e.g., 5 minutes).
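For example, the following query (the same one used in the example further below) converts the counter into a per-second rate:

```promql
# Per-second rate of queue-latency accumulation, averaged over 5 minutes
sum(rate(ts_queue_latency_microseconds[5m]))
```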
**Modify the Prometheus Adapter Configuration:**
Update the configuration to transform the counter metric into a rate-based metric. Here’s how the configuration could look:
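A minimal sketch of such a rule, assuming a standard `prometheus-adapter` deployment (the exact file location and label overrides depend on how the adapter is installed; the `ts_queue_latency_per_second` name produced here matches the HPA config below):

```yaml
rules:
  - seriesQuery: 'ts_queue_latency_microseconds'
    resources:
      overrides:
        namespace: {resource: "namespace"}
        pod: {resource: "pod"}
    # Expose the counter under a rate-based name: ts_queue_latency_per_second
    name:
      matches: "^(.*)_microseconds$"
      as: "${1}_per_second"
    # Convert the cumulative counter into a per-second rate over a 5-minute window
    metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)'
```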
**Modify the HPA Configuration:**
Update the `metrics` section in the `hpa.yaml` file to use the rate of the metric:
```yaml
metrics:
  - type: Pods
    pods:
      metric:
        name: ts_queue_latency_per_second
      target:
        type: AverageValue
        averageValue: 1000000 # Set your desired threshold here
```
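Assuming the counter is measured in microseconds, as its name suggests, the rate is in microseconds of queue latency accumulated per second, so an `averageValue` of `1000000` corresponds to roughly one second of queue latency accrued per second per pod. Tune this threshold to your latency budget.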
**Update Documentation:**
Update the documentation in `kubernetes/autoscale.md` to reflect these changes and provide guidance on selecting appropriate target values based on the rate metric.
### Why This Is Better
Using the rate of the counter metric allows the HPA to make scaling decisions based on the actual rate of change in queue latency rather than the cumulative value. This enables the HPA to scale pods up when latency is accumulating faster under load and scale them down when the rate drops, providing more responsive and efficient scaling behavior.
**Example:**

- **Current Configuration:** If `ts_queue_latency_microseconds` is used directly, the HPA sees a metric that only ever increases, causing continuous scaling up.
- **Proposed Configuration:** By using `sum(rate(ts_queue_latency_microseconds[5m]))`, the HPA sees the rate at which latency is accumulating. If the observed average rate climbs above the target value configured above, the HPA will add pods; when the rate falls back below the target, it will scale down, allowing the system to adapt dynamically to load changes.
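For reference, the HPA derives the desired replica count from the ratio of the observed average to the target (this is the standard Kubernetes HPA algorithm, not specific to this metric):

$$
\text{desiredReplicas} = \left\lceil \text{currentReplicas} \times \frac{\text{currentMetricValue}}{\text{targetMetricValue}} \right\rceil
$$

With illustrative numbers: at 4 pods, an observed average of 2,000,000 µs/s against the 1,000,000 target gives ⌈4 × 2⌉ = 8 pods; if the observed rate later halves, the HPA scales back toward 4.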
This improvement ensures better resource utilization and cost efficiency by aligning the number of pods with the actual workload.
@yardenhoch