BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

fix: add ollama codegemma to model cost map #4424

Closed · ishaan-jaff closed this 3 months ago

ishaan-jaff commented 3 months ago

## Title

## Relevant issues

## Type

- 🆕 New Feature
- 🐛 Bug Fix
- 🧹 Refactoring
- 📖 Documentation
- 🚄 Infrastructure
- ✅ Test

## Changes

## [REQUIRED] Testing - Attach a screenshot of any new tests passing locally

If there are UI changes, attach a screenshot/GIF of the working UI fixes.

vercel[bot] commented 3 months ago

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Jun 26, 2024 7:57pm |
hemangjoshi37a commented 2 months ago

Getting the same error with llama3.1:70b:

LiteLLM:ERROR: litellm_logging.py:1274 - Model=llama3.1:70b not found in completion cost map. Setting 'response_cost' to None