Closed foragerr closed 6 months ago
This list needs an update? https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py#L431
Also, this entry in the model_list is broken; it should be two separate lines:
"daanelson/flan-t5-large:ce962b3f6792a57074a601d3979db5839697add2e4e02696b3ced4c022d4767freplicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5",
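For reference, the fused string splits cleanly at the version-hash boundary (replicate version hashes are 64 hex characters, so the split point below is inferred, not taken from the source file):

```python
# The fused model_list entry above, split at the 64-hex-char version hash
# boundary (split point inferred from the hash length; verify against the file).
fixed_entries = [
    "daanelson/flan-t5-large:ce962b3f6792a57074a601d3979db5839697add2e4e02696b3ced4c022d4767f",
    "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5",
]
for entry in fixed_entries:
    name, _, version = entry.partition(":")
    assert len(version) == 64  # each replicate version hash is 64 hex chars
```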
Missing comma here? https://github.com/BerriAI/litellm/blob/main/litellm/__init__.py#L351
More generally, https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json has 308 entries, while litellm.model_list returns 246 entries. Are 62 entries missing?
Are you looking for the model cost map, litellm.model_cost?
https://docs.litellm.ai/docs/completion/token_usage#7-model_cost
here's how model_list is initialized - let me know if you see any gaps in implementation: https://github.com/BerriAI/litellm/blob/c35b4c9b80d9cd7d61bfa1120e48e30c295cc68c/litellm/__init__.py#L431
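For context, the initialization at that line concatenates per-provider lists of model-name strings into one flat list. A simplified sketch of that pattern, with hypothetical stand-in lists (the real code uses the per-provider lists defined earlier in __init__.py):

```python
# Simplified sketch of the model_list initialization pattern:
# per-provider lists of model-name strings concatenated into one list.
open_ai_chat_completion_models = ["gpt-4", "gpt-3.5-turbo"]  # stand-in data
replicate_models = ["replicate/dolly-v2-12b"]                # stand-in data

model_list = open_ai_chat_completion_models + replicate_models
print(len(model_list))  # 3
```

A missing comma between two string literals in one of those source lists silently fuses adjacent entries (Python string-literal concatenation), which is exactly the breakage quoted above.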
I'm looking for a full list of models supported by litellm. I assumed, perhaps mistakenly, that litellm.model_list is supposed to provide that.
I'm happy to just use litellm.model_cost.keys() if you think that is the right direction.
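For example, something like this sketch, using a hypothetical two-entry stand-in for the real map (which litellm loads from model_prices_and_context_window.json; the cost values below are illustrative only):

```python
# Stand-in for litellm.model_cost: the real object is a plain dict mapping
# model name -> pricing/context metadata. Values here are illustrative.
model_cost = {
    "gpt-3.5-turbo": {"max_tokens": 4097, "input_cost_per_token": 1.5e-06},
    "gemini/gemini-pro": {"max_tokens": 30720, "input_cost_per_token": 2.5e-07},
}

# Enumerate the model names the cost map knows about.
supported = sorted(model_cost.keys())
print(supported)  # ['gemini/gemini-pro', 'gpt-3.5-turbo']
```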
I had raised this PR to add gemini models to model_list and to fix a missing comma elsewhere: https://github.com/BerriAI/litellm/pull/2806
Hey @krrishdholakia @ishaan-jaff, there are still some differences between litellm.model_cost.keys() and litellm.model_list. Could you comment on which should be treated as the source of truth for models supported by litellm?
Hey @foragerr, litellm supports providers.
For example, you can call any model on Together AI through litellm.
The model list is just a list of specific popular models we're tracking for cost etc.
If you want to see which providers are supported by litellm, you can use litellm.provider_list.
If you can share specific gaps, I can also investigate those.
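A quick sketch of checking provider support this way. litellm.provider_list is a plain list of provider-name strings; a tiny stand-in list is used here so the example is self-contained (in practice you would use `from litellm import provider_list`):

```python
# Stand-in for litellm.provider_list, which is a list of provider name strings.
provider_list = ["openai", "anthropic", "together_ai", "replicate"]  # stand-in

def is_provider_supported(name: str) -> bool:
    """Case-insensitive membership check against the provider list."""
    return name.lower() in provider_list

print(is_provider_supported("together_ai"))  # True with the stand-in data
```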
import litellm
models_from_model_cost = litellm.model_cost.keys()
models_from_model_list = litellm.model_list
# print count of models common to both model_cost and model_list
print(f"Common models: {len(set(models_from_model_cost) & set(models_from_model_list))}")
# print count of models unique to model_cost
print(f"Models in model_cost, but not model_list: {len(set(models_from_model_cost) - set(models_from_model_list))}")
# print count of models unique to model_list
print(f"Models in model_list, but not model_cost: {len(set(models_from_model_list) - set(models_from_model_cost))}")
Common models: 206
Models in model_cost, but not model_list: 102
Models in model_list, but not model_cost: 45
Your comment about supporting providers rather than models makes sense.
Over in the OpenDevin repo, model is dynamically set in the call

response = litellm.completion(
    model=some_model,
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}]
)

and it would be nice to have a list of allowed values for model. They're leaning towards using

list(set(litellm.model_list) | set(litellm.model_cost.keys()))
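That union approach can be sketched with stand-in lists (the real inputs would be litellm.model_list and litellm.model_cost.keys()):

```python
# Stand-ins for the two sources; in practice use litellm.model_list and
# litellm.model_cost.keys().
model_list = ["gpt-4", "gemini/gemini-pro", "replicate/dolly-v2-12b"]
model_cost_keys = ["gpt-4", "claude-2", "gemini/gemini-pro"]

# Union of both sources, deduplicated and sorted for a stable "allowed values" list.
allowed_models = sorted(set(model_list) | set(model_cost_keys))
print(allowed_models)
# ['claude-2', 'gemini/gemini-pro', 'gpt-4', 'replicate/dolly-v2-12b']
```

Sorting makes the result deterministic across runs, which matters if the list feeds a dropdown or a validation check.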
What happened?
This snippet works, but the litellm.model_list response does not contain gemini/gemini-pro.

Relevant log output
For litellm==1.34.21, litellm.model_list returns: