BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: litellm_proxy_total_requests_metric_total metrics only work for config defined models #6445

Open · tkg61 opened 3 weeks ago

tkg61 commented 3 weeks ago

What happened?

When models are defined in the config file, they populate the Prometheus metric litellm_proxy_total_requests_metric_total. However, if models are added via the GUI (they show as "DB Model" in the status column on the Models page), this metric is not populated for them. Their other metrics work, just not this one.

Steps to reproduce:

1. Define one model in the config file (a sketch of such a config follows this list).
2. Add one model via the GUI, so its status becomes "DB Model".
3. Send requests to both models.
4. Look at the /metrics page: litellm_proxy_total_requests_metric_total only appears for the config-defined model.
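For concreteness, here is a minimal sketch of the config-defined half of the repro, following the proxy's model_list / litellm_settings YAML layout. The model name, upstream model, and key reference are placeholders, and the callback line assumes the documented Prometheus integration is enabled:

```yaml
# Hypothetical minimal proxy config: one config-defined model with the
# Prometheus callback enabled. Names and the key reference are placeholders.
model_list:
  - model_name: gpt-4o-from-config   # this model does get the counter metric
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["prometheus"]
```

The second model is then added through the Admin UI so that its status reads "DB Model".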

A metric that does work for both models is litellm_request_total_latency_metric_sum.
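A quick, unofficial way to see the discrepancy is to scrape /metrics and compare which model labels each metric carries. This is a sketch, not part of litellm: it assumes the proxy listens on localhost:4000 and that both metrics expose a model label (the metric names come from this issue; if the counter uses different label names, adjust the parsing):

```python
# Compare which models appear under each metric on the proxy's /metrics page.
# Host/port and the assumption of a model="..." label are placeholders.
import urllib.request

METRICS_URL = "http://localhost:4000/metrics"

def models_for_metric(metric_name: str, text: str) -> set[str]:
    """Collect the model="..." label values on samples of the given metric."""
    models = set()
    for line in text.splitlines():
        # Skip "# HELP"/"# TYPE" lines automatically: they don't start with the name.
        if line.startswith(metric_name) and 'model="' in line:
            models.add(line.split('model="', 1)[1].split('"', 1)[0])
    return models

body = urllib.request.urlopen(METRICS_URL).read().decode()

counter_models = models_for_metric("litellm_proxy_total_requests_metric_total", body)
latency_models = models_for_metric("litellm_request_total_latency_metric_sum", body)

print("models with request counter:", counter_models)
print("models with latency sum:   ", latency_models)
# Per the report, the DB (GUI-added) model should show up only in the second set.
print("missing from counter:", latency_models - counter_models)
```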

Relevant log output

No logs for this.


micahjsmith commented 2 weeks ago

Overall, the fact that there are any differences between "config models" and "DB models" at all is confusing to me and not desirable.