BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

Ensure cost per token is float. #6811

Open · haringsrob opened 6 days ago

haringsrob commented 6 days ago

Title

When cost-per-token values are passed as quoted strings in the config, they are concatenated rather than summed on streamed responses, and the cost calculation fails. In addition, if you do not quote the floats when using the Helm charts, they are converted to scientific notation.

There is no validation at the moment, which results in log output like this:

adding spend to team db. Response cost: 1e-061e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-065e-06. team_id: None.
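For illustration only (the values below are made up), a minimal Python sketch of why the log looks like this: string costs concatenate across streamed chunks instead of summing, while the same values cast to float add up correctly.

```python
# Hypothetical per-chunk costs, mimicking the log above: values read from a
# quoted config arrive as strings, one per streamed chunk.
chunk_costs = ["1e-06", "1e-06"] + ["5e-06"] * 8

# "Adding" strings concatenates them, producing the garbled cost in the log.
concatenated = ""
for cost in chunk_costs:
    concatenated += cost
print(concatenated)  # 1e-061e-065e-065e-06...

# Casting each value to float first yields the intended total.
total = sum(float(cost) for cost in chunk_costs)
print(total)  # ~4.2e-05
```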

Casting these values to float, when present, fixes this.

Relevant issues

https://github.com/BerriAI/litellm/issues/6641

Type

🐛 Bug Fix

Changes

Changes register_model to ensure cost-per-token values are cast to float.
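A minimal sketch of the coercion, assuming the registered model info is a plain dict; the helper name and key list below are illustrative, not the actual litellm implementation:

```python
# Illustrative helper, not litellm's actual code: coerce known cost fields to
# float so downstream accounting adds numbers instead of concatenating strings.
# float() also accepts scientific-notation strings like "1e-06", which covers
# the unquoted-Helm-value case.
COST_KEYS = ("input_cost_per_token", "output_cost_per_token")  # assumed key names

def coerce_costs_to_float(model_info: dict) -> dict:
    for key in COST_KEYS:
        value = model_info.get(key)
        if value is not None:
            model_info[key] = float(value)  # "1e-06" -> 1e-06
    return model_info
```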

vercel[bot] commented 6 days ago

The latest updates on your projects. Learn more about Vercel for Git.

litellm: ✅ Ready (updated Nov 19, 2024, 1:05pm UTC)