pytorch / serve

It seems like `metrics.yaml` doesn't apply #2963

Closed · feeeper closed this 8 months ago

feeeper commented 8 months ago

I'm running TorchServe in WSL2. There are three issues with the metrics:

  1. Even if the metrics_config parameter in ts.config points to a non-existent file, everything works without any errors. It looks like the parameter doesn't take effect in my case.
  2. Even if I comment out or remove some metrics (in ts_metrics or model_metrics), those metrics still appear in ts_metrics.log and model_metrics.log.
  3. I can't get most of the ts metrics via the metrics API. Only ts_inference_requests_total, ts_inference_latency_microseconds and ts_queue_latency_microseconds are returned (see the curl call and the full output below).
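
For reference, this is how the metrics API can be queried (port 8082 matches metrics_address in my ts.config below):

curl http://127.0.0.1:8082/metrics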

ts.config:

models={\
  "doc_model": {\
    "1.0": {\
        "defaultVersion": true,\
        "minWorkers": 1,\
        "maxWorkers": 1,\
        "batchSize": 1\
    }\
  }\
}
inference_address=http://0.0.0.0:8080
management_address=http://0.0.0.0:8081
metrics_address=http://0.0.0.0:8082
metrics_mode=prometheus
metrics_config=./metrics.yaml

number_of_netty_threads=32
job_queue_size=1000
model_store=/home/model-server/model-store
workflow_store=/home/model-server/wf-store
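
Note that metrics_config is a relative path here. In case TorchServe resolves it against some other working directory (my assumption, not verified), an absolute path rules that ambiguity out:

# print the absolute path and use it as metrics_config instead of ./metrics.yaml
realpath ./metrics.yaml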

metrics.yaml:

dimensions:
  - &model_name "ModelName"
  - &worker_name "WorkerName"
  - &level "Level"
  - &device_id "DeviceId"
  - &hostname "Hostname"

ts_metrics:
  counter:
    # - name: Requests2XX
    #   unit: Count
    #   dimensions: [*level, *hostname]
    - name: Requests4XX
      unit: Count
      dimensions: [*level, *hostname]
    # - name: Requests5XX
    #   unit: Count
    #   dimensions: [*level, *hostname, *model_name]
    - name: ts_inference_requests_total
      unit: Count
      dimensions: [*level, "model_name", "model_version", "hostname"]
    - name: ts_inference_latency_microseconds
      unit: Microseconds
      dimensions: ["model_name", "model_version", "hostname"]
    - name: ts_queue_latency_microseconds
      unit: Microseconds
      dimensions: ["model_name", "model_version", "hostname"]

  histogram:
    - name: NameOfHistogramMetric
      unit: ms
      dimensions: [*model_name, *level]

  gauge:
    - name: QueueTime
      unit: Milliseconds
      dimensions: [*level, *hostname]
    - name: WorkerThreadTime
      unit: Milliseconds
      dimensions: [*level, *hostname]
    - name: WorkerLoadTime
      unit: Milliseconds
      dimensions: [*worker_name, *level, *hostname]
    - name: CPUUtilization
      unit: Percent
      dimensions: [*level, *hostname]
    - name: MemoryUsed
      unit: Megabytes
      dimensions: [*level, *hostname]
    - name: MemoryAvailable
      unit: Megabytes
      dimensions: [*level, *hostname]
    - name: MemoryUtilization
      unit: Percent
      dimensions: [*level, *hostname]
    - name: DiskUsage
      unit: Gigabytes
      dimensions: [*level, *hostname]
    - name: DiskUtilization
      unit: Percent
      dimensions: [*level, *hostname]
    - name: DiskAvailable
      unit: Gigabytes
      dimensions: [*level, *hostname]
    - name: GPUMemoryUtilization
      unit: Percent
      dimensions: [*level, *device_id, *hostname]
    - name: GPUMemoryUsed
      unit: Megabytes
      dimensions: [*level, *device_id, *hostname]
    - name: GPUUtilization
      unit: Percent
      dimensions: [*level, *device_id, *hostname]

model_metrics:
  # Dimension "Hostname" is automatically added for model metrics in the backend
  gauge:
    - name: HandlerTime
      unit: ms
      dimensions: [*model_name, *level]
    - name: PredictionTime
      unit: ms
      dimensions: [*model_name, *level]
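
A quick way to rule out a silent parse failure is to check that the file is valid YAML (assuming PyYAML is available in the environment):

python -c "import yaml; yaml.safe_load(open('metrics.yaml')); print('valid yaml')"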

Metrics API output (on torchserve 0.6.0):

# HELP ts_inference_latency_microseconds Cumulative inference duration in microseconds
# TYPE ts_inference_latency_microseconds counter
ts_inference_latency_microseconds{uuid="57984a8f-c19a-4c93-b9cd-cc7eb8b1fa55",model_name="doc_model",model_version="default",} 4925586.4
# HELP ts_inference_requests_total Total number of inference requests.
# TYPE ts_inference_requests_total counter
ts_inference_requests_total{uuid="57984a8f-c19a-4c93-b9cd-cc7eb8b1fa55",model_name="doc_model",model_version="default",} 3.0
# HELP ts_queue_latency_microseconds Cumulative queue duration in microseconds
# TYPE ts_queue_latency_microseconds counter
ts_queue_latency_microseconds{uuid="57984a8f-c19a-4c93-b9cd-cc7eb8b1fa55",model_name="doc_model",model_version="default",} 291.4

Here is how I build the mar archive:

torch-model-archiver \
  --model-name doc_model \
  --version 1.0 \
  --serialized-file model/pytorch_model.bin \
  --handler ./src/transformers_vectorizer_handler.py \
  --extra-files "./model/config.json,./tokenizer" \
  -f

mkdir -p model_store && mv doc_model.mar model_store/

And how I start TorchServe:

torchserve \
  --start \
  --model-store model_store \
  --models doc_model=doc_model.mar \
  --ncs \
  --ts-config ./ts.config
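
After startup I send a few requests so the ts_* metrics get populated; something like the call below, where sample.txt is a placeholder for a real request payload:

curl http://127.0.0.1:8080/predictions/doc_model -T sample.txt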

Versions:

- torchserve==0.6.0
- torch-model-archiver==0.6.0
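
The upgrade mentioned in the UPD below was a plain pip version bump (assuming a pip-managed environment; the exact command may differ per setup):

pip install --upgrade torchserve==0.9.0 torch-model-archiver==0.9.0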

UPD: I've updated torchserve and torch-model-archiver to 0.9.0, and now I can see more of the metrics from metrics.yaml:

# HELP MemoryUtilization Torchserve prometheus gauge metric with unit: Percent
# TYPE MemoryUtilization gauge
MemoryUtilization{Level="Host",Hostname="msi",} 15.1
# HELP DiskUtilization Torchserve prometheus gauge metric with unit: Percent
# TYPE DiskUtilization gauge
DiskUtilization{Level="Host",Hostname="msi",} 84.1
# HELP GPUMemoryUtilization Torchserve prometheus gauge metric with unit: Percent
# TYPE GPUMemoryUtilization gauge
GPUMemoryUtilization{Level="Host",DeviceId="0",Hostname="msi",} 21.27685546875
# HELP DiskUsage Torchserve prometheus gauge metric with unit: Gigabytes
# TYPE DiskUsage gauge
DiskUsage{Level="Host",Hostname="msi",} 200.3065414428711
# HELP NameOfHistogramMetric Torchserve prometheus histogram metric with unit: ms
# TYPE NameOfHistogramMetric histogram
# HELP ts_inference_requests_total Torchserve prometheus counter metric with unit: Count
# TYPE ts_inference_requests_total counter
ts_inference_requests_total{model_name="doc_model",model_version="default",hostname="msi",} 4.0
# HELP MemoryUsed Torchserve prometheus gauge metric with unit: Megabytes
# TYPE MemoryUsed gauge
MemoryUsed{Level="Host",Hostname="msi",} 3460.9609375
# HELP WorkerThreadTime Torchserve prometheus gauge metric with unit: Milliseconds
# TYPE WorkerThreadTime gauge
WorkerThreadTime{Level="Host",Hostname="msi",} 3.0
# HELP WorkerLoadTime Torchserve prometheus gauge metric with unit: Milliseconds
# TYPE WorkerLoadTime gauge
WorkerLoadTime{WorkerName="W-9000-doc_model_1.0",Level="Host",Hostname="msi",} 7894.0
# HELP ts_inference_latency_microseconds Torchserve prometheus counter metric with unit: Microseconds
# TYPE ts_inference_latency_microseconds counter
ts_inference_latency_microseconds{model_name="doc_model",model_version="default",hostname="msi",} 1.66192689E7
# HELP DiskAvailable Torchserve prometheus gauge metric with unit: Gigabytes
# TYPE DiskAvailable gauge
DiskAvailable{Level="Host",Hostname="msi",} 37.860321044921875
# HELP GPUMemoryUsed Torchserve prometheus gauge metric with unit: Megabytes
# TYPE GPUMemoryUsed gauge
GPUMemoryUsed{Level="Host",DeviceId="0",Hostname="msi",} 1743.0
# HELP PredictionTime Torchserve prometheus gauge metric with unit: ms
# TYPE PredictionTime gauge
PredictionTime{ModelName="doc_model",Level="Model",Hostname="msi",} 2738.21
# HELP MemoryAvailable Torchserve prometheus gauge metric with unit: Megabytes
# TYPE MemoryAvailable gauge
MemoryAvailable{Level="Host",Hostname="msi",} 21698.54296875
# HELP CPUUtilization Torchserve prometheus gauge metric with unit: Percent
# TYPE CPUUtilization gauge
CPUUtilization{Level="Host",Hostname="msi",} 0.0
# HELP QueueTime Torchserve prometheus gauge metric with unit: Milliseconds
# TYPE QueueTime gauge
QueueTime{Level="Host",Hostname="msi",} 0.0
# HELP HandlerTime Torchserve prometheus gauge metric with unit: ms
# TYPE HandlerTime gauge
# HELP Requests4XX Torchserve prometheus counter metric with unit: Count
# TYPE Requests4XX counter
# HELP ts_queue_latency_microseconds Torchserve prometheus counter metric with unit: Microseconds
# TYPE ts_queue_latency_microseconds counter
ts_queue_latency_microseconds{model_name="doc_model",model_version="default",hostname="msi",} 6382220.600000001
# HELP GPUUtilization Torchserve prometheus gauge metric with unit: Percent
# TYPE GPUUtilization gauge
GPUUtilization{Level="Host",DeviceId="0",Hostname="msi",} 0.0
feeeper commented 8 months ago

All problems were gone after I updated TorchServe to version 0.9.0.