Closed: okhat closed this issue 1 month ago.
As a suggestion, shouldn't the warning above occur only once? On subsequent requests in the same run, we shouldn't keep getting the same warning about the same issue.
More generally, many people will ultimately use models that have not been added to the cost calculator. Should we really bombard them with warnings about that on every request? I'm a lot less familiar with this than you are, but it seems like cost calculation should just return None silently when it isn't available. There's nothing so urgent about a missing cost calculation that it demands a long and loud warning.
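For illustration, a minimal sketch (not litellm's actual code) of what "warn once per missing model per run, then stay silent" could look like; the cost map and helper name are invented placeholders:

```python
# Hypothetical sketch, not litellm's implementation: warn at most once per
# unknown model per process, then return None silently on later requests.
import logging

logger = logging.getLogger(__name__)

KNOWN_COSTS = {"gpt-4o-mini": 0.00000015}  # invented placeholder map: USD per token
_warned_models: set[str] = set()

def cost_or_none(model: str, total_tokens: int) -> float | None:
    per_token = KNOWN_COSTS.get(model)
    if per_token is None:
        if model not in _warned_models:
            _warned_models.add(model)
            logger.warning("No cost mapping for %s; response_cost will be None.", model)
        return None
    return per_token * total_tokens
```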
@okhat open to suggestions here.
The motivation was to make litellm more observable: since we support returning the calculated response cost on the response object via response._hidden_params["response_cost"], this error shows up to indicate why the cost might not be getting calculated correctly.
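For context, a short sketch of how a caller reads that field, assuming _hidden_params behaves like a dict on the returned response (the model name is just an example):

```python
# Reading the calculated response cost that litellm attaches to the response,
# as described above.
import litellm

response = litellm.completion(
    model="databricks/databricks-meta-llama-3-1-70b-instruct",
    messages=[{"role": "user", "content": "Hello"}],
)

# If the model has no entry in the cost map, this may be None, which is
# what triggers the warning being discussed here.
cost = response._hidden_params.get("response_cost")
print("response_cost:", cost)
```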
Possible ideas (open to your suggestions too):
- `litellm._logging._disable_debugging`
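A hedged sketch of using that helper from a user script; this assumes it takes no arguments, and being an underscore-prefixed internal API it may change between versions:

```python
# Suppress litellm's internal debug/warning output via the helper named above
# (internal API; behavior is an assumption here, not documented usage).
from litellm._logging import _disable_debugging

_disable_debugging()  # intended to silence the repeated cost-calculation warning
```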
Looks like we already have support for treating it as a debug-level error, so I'll move to just doing that. @okhat
Will also add the missing Databricks model prices.
Databricks has moved their pricing to DBUs, so we can store the DBU information and apply a default DBU-to-dollar conversion (which can be overridden by the user).
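A hypothetical sketch (not litellm's implementation) of that idea: store per-model DBU consumption and apply a default, user-overridable dollars-per-DBU rate. All names and numbers here are invented placeholders, not real Databricks prices:

```python
# Hypothetical DBU-based pricing sketch; figures are illustrative only.
DEFAULT_USD_PER_DBU = 0.07  # placeholder conversion rate

DBU_PER_1K_TOKENS = {
    "databricks/databricks-meta-llama-3-1-70b-instruct": {"input": 1.0, "output": 3.0},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int,
                  usd_per_dbu: float = DEFAULT_USD_PER_DBU) -> float | None:
    """Return an estimated USD cost, or None if the model has no DBU entry."""
    rates = DBU_PER_1K_TOKENS.get(model)
    if rates is None:
        return None
    dbus = (prompt_tokens / 1000) * rates["input"] + (completion_tokens / 1000) * rates["output"]
    return dbus * usd_per_dbu
```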
All of these sound good to me! Thanks a ton @krrishdholakia
What happened?
This is an extension of #5597, which I can't re-open.
The fix by @krrishdholakia was great, but it doesn't yet handle most Databricks models, only one. Can we consider a more general fix? This is less pressing now because we only get a warning, not an error.
When using `model=databricks/databricks-meta-llama-3-1-70b-instruct`, the error complains about `databricks/meta-llama-3.1-70b-instruct-082724`. The list of `databricks/*` models at https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json is great for reference.
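As a possible stopgap while those entries are missing, users can register pricing locally with litellm.register_model; a hedged sketch (the per-token costs below are placeholders, not actual Databricks prices):

```python
# Register pricing for a model missing from model_prices_and_context_window.json
# so cost calculation stops warning. Placeholder costs, not real prices.
import litellm

litellm.register_model({
    "databricks/databricks-meta-llama-3-1-70b-instruct": {
        "input_cost_per_token": 1e-06,   # placeholder
        "output_cost_per_token": 3e-06,  # placeholder
        "litellm_provider": "databricks",
        "mode": "chat",
    }
})
```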