pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

Incorrect Metric Type for HPA Scaling #3286

Open liaddrori1 opened 2 months ago

liaddrori1 commented 2 months ago

📚 The doc issue

In the kubernetes/autoscale.md file, the current implementation uses the ts_queue_latency_microseconds metric to scale the Horizontal Pod Autoscaler (HPA). This metric is a counter: it only increases over time and never decreases, so the HPA keeps scaling up as the counter grows but never scales back down when the load decreases.
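To illustrate the problem, here is a minimal Python sketch (with hypothetical sample values for ts_queue_latency_microseconds) contrasting the raw counter with its per-interval rate:

```python
# Sketch: a counter metric only grows, so a threshold on the raw value
# can never trigger scale-down; the per-second rate can fall.
# All numbers below are hypothetical scrape samples.

samples = [0, 500_000, 1_200_000, 1_300_000, 1_350_000]  # cumulative counter
interval = 60  # seconds between scrapes

rates = [(b - a) / interval for a, b in zip(samples, samples[1:])]

# The counter never decreases, even after the load spike ends...
assert all(b >= a for a, b in zip(samples, samples[1:]))
# ...but the rate does drop, giving the HPA a signal to scale down.
assert rates[-1] < rates[0]
print(rates)
```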

Suggest a potential alternative/fix

To resolve this issue, it is recommended to use the rate of the counter metric over a time interval to enable both scaling up and down effectively.

  1. Use the Rate Function:

    • Utilize the rate function in Prometheus to calculate the rate of change of the ts_queue_latency_microseconds metric. This provides a per-second average rate of increase over a specified time window (e.g., 5 minutes).
  2. Modify the Prometheus Adapter Configuration:

    • Update the configuration to transform the counter metric into a rate-based metric. Here’s how the configuration should look:

      ```yaml
      rules:
      - seriesQuery: 'ts_queue_latency_microseconds'
        resources:
          overrides:
            namespace:
              resource: namespace
            pod:
              resource: pod
        name:
          matches: "^(.*)_microseconds$"
          as: "${1}_per_second"
        metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)'
      ```
  3. Modify the HPA Configuration:

    • Update the metrics section in the hpa.yaml file to use the rate of the metric:
      ```yaml
      metrics:
      - type: Pods
        pods:
          metric:
            name: ts_queue_latency_per_second
          target:
            type: AverageValue
            averageValue: 1000000  # Set your desired threshold here
      ```
  4. Update Documentation:

    • Update the documentation in kubernetes/autoscale.md to reflect these changes and provide guidance on selecting appropriate target values based on the rate metric.
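The transformation that the `metricsQuery` above performs can be approximated in Python (a simplified model of Prometheus `rate()` that ignores extrapolation to window boundaries and counter resets; the scrape values are hypothetical):

```python
def approx_rate(samples):
    """Simplified Prometheus rate(): per-second increase computed from the
    first and last samples in the window. Real rate() also extrapolates to
    the window boundaries and handles counter resets."""
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# Hypothetical (timestamp_seconds, counter_value) scrapes over a 5m window
scrapes = [(0, 0), (60, 30_000_000), (120, 70_000_000),
           (180, 90_000_000), (240, 95_000_000), (300, 96_000_000)]

per_second = approx_rate(scrapes)
print(per_second)  # -> 320000.0 (microseconds of queue latency accrued per second)
```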

Why This Is Better

Using the rate of the counter metric allows the HPA to make scaling decisions based on the actual rate of change in queue latency rather than the cumulative value. This approach enables the HPA to scale pods up when the rate of incoming requests increases and scale down when the rate decreases, providing more responsive and efficient scaling behavior.
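Concretely, the HPA's documented desired-replica calculation, desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue), behaves sensibly once the metric can fall. A sketch with illustrative numbers, using the 1000000 target from the hpa.yaml above:

```python
import math

def desired_replicas(current_replicas, current_value, target_value):
    # Kubernetes HPA scaling formula (per the HPA documentation)
    return math.ceil(current_replicas * current_value / target_value)

target = 1_000_000  # averageValue from the hpa.yaml above

# Load spikes: per-pod rate is double the target -> scale up
assert desired_replicas(4, 2_000_000, target) == 8
# Load subsides: per-pod rate halves -> scale back down
assert desired_replicas(8, 500_000, target) == 4
```

With the raw counter instead, current_value could only ever grow, so the second case would never occur.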

This improvement ensures better resource utilization and cost efficiency by aligning the number of pods with the actual workload.

@yardenhoch

mreso commented 2 months ago

Thanks for flagging this, @liaddrori1. @namannandan, do you have bandwidth to look at this?