BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: LiteLLM:ERROR: litellm_logging.py #4372

Closed: nextzard closed this issue 4 months ago

nextzard commented 4 months ago

What happened?

I'm using Fedora 40 and installed it today.

I am using the Ollama model codegemma, but after it finishes answering I get this message:

22:42:02 - LiteLLM:ERROR: litellm_logging.py:1266 - Model=codegemma not found in completion cost map. Setting 'response_cost' to None

If I use it with llama3, there is no error.
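A minimal sketch of the kind of call that triggers this (an assumption on my part: a local Ollama server on its default port, with codegemma already pulled) would look like:

```python
import litellm

# Repro sketch: assumes Ollama is running locally on its default port
# and the codegemma model has already been pulled.
response = litellm.completion(
    model="ollama/codegemma",
    messages=[{"role": "user", "content": "Write a hello world function in Python."}],
    api_base="http://localhost:11434",
)

print(response.choices[0].message.content)
# The LiteLLM:ERROR line is logged after the response is returned, when
# LiteLLM cannot find a per-token price for "codegemma" to fill in response_cost.
```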

Maybe it's related to https://github.com/BerriAI/litellm/blob/main/litellm/model_prices_and_context_window_backup.json

I wonder if the error occurs when the model is not listed in that JSON file. Otherwise it works well for me, thanks.
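If the missing cost-map entry is indeed the cause, one possible workaround (only a sketch, assuming litellm.register_model as described in the LiteLLM docs, with zero prices since the model runs locally and an assumed context window) would be to register the model before calling it:

```python
import litellm

# Workaround sketch: register codegemma in LiteLLM's cost map so that
# response_cost can be computed. Prices are 0.0 because the model runs
# locally via Ollama; the max_tokens value is an assumption.
litellm.register_model({
    "ollama/codegemma": {
        "max_tokens": 8192,
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
        "litellm_provider": "ollama",
        "mode": "chat",
    }
})

response = litellm.completion(
    model="ollama/codegemma",
    messages=[{"role": "user", "content": "hi"}],
)
# With the model registered, the cost lookup should succeed and the
# "not found in completion cost map" error should no longer be logged.
```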

Relevant log output

No response

Twitter / LinkedIn details

No response

ishaan-jaff commented 4 months ago

Fixed here: https://github.com/BerriAI/litellm/pull/4424

ishaan-jaff commented 4 months ago

@nextzard Curious, what do you use LiteLLM for today?

Harry989 commented 4 months ago

I am having this issue with phi3 and llava-phi3 using Ollama.