pytorch / serve

Serve, optimize and scale PyTorch models in production
https://pytorch.org/serve/
Apache License 2.0

System metrics are not shown on port 8082 & Making inference API results as metrics #729

Closed: yunjjun closed this issue 4 years ago

yunjjun commented 4 years ago

Hello. According to this page, TorchServe provides system metrics: https://pytorch.org/serve/metrics.html

However, when TorchServe is added as a scrape target in Prometheus and queried, the system metrics are not shown. I wonder whether a setting needs to be changed or whether there is a special method. (https://pytorch.org/serve/metrics_api.html)

Additionally, I am wondering whether it is possible to turn the results of the inference API into metrics and send them to Prometheus.

Thank you for your reply.

harshbafna commented 4 years ago

@yunjjun: Metrics logging and the metrics API are two different things in TorchServe :-)

Please refer to the following documentation for the metrics API and its integration with Prometheus:

https://pytorch.org/serve/metrics_api.html
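For reference, the metrics API on port 8082 serves plain text in the Prometheus exposition format. Below is a minimal sketch of reading such output; the sample payload is illustrative only (the exact metric names and labels depend on your TorchServe version), and the parser is not an official TorchServe or Prometheus utility:

```python
# Sketch: parse Prometheus exposition-format text, such as the output of
# `curl http://127.0.0.1:8082/metrics`. The sample below is illustrative;
# real metric names and labels depend on the TorchServe version.
SAMPLE = """\
# HELP ts_inference_requests_total Total number of inference requests.
# TYPE ts_inference_requests_total counter
ts_inference_requests_total{model_name="noop",model_version="default"} 4.0
# HELP ts_queue_latency_microseconds Cumulative queue duration in microseconds
# TYPE ts_queue_latency_microseconds counter
ts_queue_latency_microseconds{model_name="noop",model_version="default"} 364.07
"""

def parse_exposition(text):
    """Return {metric_name: value} for each sample line, skipping comments."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comment lines and blanks
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]  # drop the {label="..."} block
        samples[name] = float(value)
    return samples

metrics = parse_exposition(SAMPLE)
print(metrics["ts_inference_requests_total"])  # -> 4.0
```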

yunjjun commented 4 years ago

Thanks for answering! I have two questions about your answer:

1. Why was it designed that way?
2. Are there any workarounds, or plans to expose the logged metrics through the metrics API?

And I think asking two questions at once caused confusion. My other question is how to get the metrics from #1, like those in #2, from Prometheus. Please let me know about it :-)

[#1] (screenshot)

[#2] (screenshot)

harshbafna commented 4 years ago

@yunjjun: The current metrics API does not return system metrics data; it is tightly coupled to Prometheus and returns data in the Prometheus-specific format.

There is work in progress (#722) to address this. The enhancement will roll out with the next release.
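Until that lands, one possible stopgap is to convert TorchServe's metrics-log lines into exposition-format text yourself and serve that to Prometheus. The sketch below assumes a StatsD-like log-line shape (`MetricName.Unit:value|#dimensions,...`); the actual format varies by TorchServe version, so treat both the sample lines and the regex as assumptions to adjust against your own `ts_metrics.log`:

```python
import re

# Hypothetical sample lines in a StatsD-like metrics-log shape
# (MetricName.Unit:value|#dimensions|#hostname:host,timestamp).
# The exact format varies by TorchServe version; this is a sketch only.
SAMPLE_LOG = """\
CPUUtilization.Percent:2.5|#Level:Host|#hostname:my-host,1600000000
MemoryUsed.Megabytes:1024.0|#Level:Host|#hostname:my-host,1600000000
"""

LINE_RE = re.compile(r"^(?P<name>\w+)\.(?P<unit>\w+):(?P<value>[\d.]+)\|")

def to_exposition(log_text):
    """Convert metrics-log lines into Prometheus exposition-format lines."""
    out = []
    for line in log_text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue  # skip lines that do not match the assumed shape
        # e.g. CPUUtilization.Percent -> cpuutilization_percent
        metric = f"{m['name']}_{m['unit']}".lower()
        out.append(f"{metric} {float(m['value'])}")
    return "\n".join(out)

print(to_exposition(SAMPLE_LOG))
```

The resulting text could then be served from a small HTTP endpoint that Prometheus scrapes alongside (or instead of) port 8082.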

yunjjun commented 4 years ago

Thank you so much for answering! 👍 I'm looking forward to seeing the next release soon :)

hungtooc commented 2 years ago

It's 2022, and system metrics still have not been added to the metrics API.