Hi @okhat, is this on a later version of LiteLLM? The line you're referring to is now a debug message.
This is as of v1.44.25+: https://github.com/BerriAI/litellm/commit/0295a2256117c574284794d180c7ee15245c69c1
This is still not fixed on the latest LiteLLM, and many users are complaining, @krrishdholakia.
model = "openai/meta-llama/Meta-Llama-3-8B"
api_base = "http://localhost:7504/v1"  # SGLang backend
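For context, a minimal sketch of how that configuration gets wired into DSPy 2.5's LiteLLM-backed client (the `api_key` placeholder is an assumption; local SGLang servers typically ignore it):

```python
import dspy

# Sketch: point DSPy's LiteLLM-backed client at the local SGLang server.
lm = dspy.LM(
    model="openai/meta-llama/Meta-Llama-3-8B",
    api_base="http://localhost:7504/v1",  # SGLang backend
    api_key="local",  # placeholder; a local server usually ignores it
)
dspy.configure(lm=lm)

# Every call through this LM runs through LiteLLM's cost calculator, which
# has no pricing entry for the local model and emits the error in question.
```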
It's probably coming from here:
+1 to this. This blocked me from doing some of my work today because the outputs on every DSPy call would crash my Jupyter notebook.
I ended up commenting out the part of cost_calculator that raises the error, as a temporary fix.
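A less invasive workaround than editing library code may be to register a zero-cost pricing entry for the local model, so the calculator finds a mapping and stops raising. This is only a sketch: `litellm.register_model` is a public helper, but the exact model key format and pricing fields below are assumptions.

```python
import litellm

# Sketch: add a zero-cost entry for the local model to LiteLLM's pricing map
# so the cost calculator no longer fails to map it.
litellm.register_model({
    "openai/meta-llama/Meta-Llama-3-8B": {
        "litellm_provider": "openai",
        "mode": "chat",
        "input_cost_per_token": 0.0,
        "output_cost_per_token": 0.0,
    }
})
```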
Acknowledging this. Will have this fixed by tomorrow.
Fix pushed - https://github.com/BerriAI/litellm/commit/cb7a60a8fa3df561f8075088caed959a8f56ce1c
Will be live in tomorrow's release
Thank you @krrishdholakia !!
What happened?
Following up from #5597. LiteLLM is now in DSPy 2.4.16, and I hope we can fix this in time for DSPy 2.5.
That warning and BadRequest errors still pop up for our users.
For example, when using OpenAI-compatible servers like SGLang to serve llama3-8b locally and setting:
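(the same configuration quoted in the comment above):

```python
model = "openai/meta-llama/Meta-Llama-3-8B"
api_base = "http://localhost:7504/v1"  # SGLang backend
```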
One gets:
IMO, the cost calculator shouldn't be so loud or suggest to users that something is broken by logging at the "error" level.
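Until a fix lands, a user-side way to quiet the message is to demote LiteLLM's logger via the standard logging module. A sketch, assuming the logger is named "LiteLLM" as in current releases:

```python
import logging

# Sketch: silence LiteLLM's error-level cost-calculator log message.
# This only suppresses logging; it does not swallow raised exceptions.
logging.getLogger("LiteLLM").setLevel(logging.CRITICAL)
```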