BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: Proxy: Constant "Provider NOT provided" errors in log on invalid model name, no way to stop them #5853

Open liffiton opened 1 month ago

liffiton commented 1 month ago

What happened?

In the proxy admin UI (v1.44.23 stable), I added an invalid model by mistake*, and now I'm getting constant error messages in the logs with no way I can see to stop them.

The error:

LiteLLM Proxy:ERROR: proxy_server.py:2200 - Error adding/deleting model to llm_router: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
[...]

I see nothing in the proxy admin UI that lets me clean out this invalid model, and the error messages just keep coming every 10 seconds.

*Side note: I'm still not sure how to add a DeepSeek model correctly. Selecting "OpenAI-compatible", then giving the model name as deepseek/deepseek-chat (as in the docs) doesn't seem to work, though maybe it's because the first model I added is stuck in there, preventing later additions from processing?
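[Editorial note] The "LLM Provider NOT provided" error above comes from LiteLLM inferring the provider from a "provider/model" prefix on the model name; a bare name it cannot map to a provider is rejected. The following is a hypothetical, simplified sketch of that resolution logic (the provider set and helper name are illustrative, not LiteLLM's actual implementation):

```python
# Illustrative subset of providers; LiteLLM supports 100+.
KNOWN_PROVIDERS = {"openai", "deepseek", "huggingface", "anthropic"}

def resolve_provider(model: str) -> str:
    """Return the provider part of a 'provider/model' string, or raise.

    Mimics (loosely) why 'deepseek-chat' fails while
    'deepseek/deepseek-chat' succeeds.
    """
    provider, sep, _rest = model.partition("/")
    if sep and provider in KNOWN_PROVIDERS:
        return provider
    raise ValueError(
        f"LLM Provider NOT provided. You passed model={model}; "
        "use a prefixed name such as 'deepseek/deepseek-chat'."
    )

print(resolve_provider("deepseek/deepseek-chat"))  # -> deepseek
```

With an unprefixed name like "deepseek-chat", the partition yields no provider segment and the lookup fails, which matches the BadRequestError quoted in the log.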

Relevant log output

No response

Twitter / LinkedIn details

No response

liffiton commented 1 month ago

Update: After deleting the existing database to restart from scratch (luckily I wasn't deep into configuration yet), adding the model correctly with the prefix does work now. So the invalid model does appear to have been blocking the addition of any other models.
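[Editorial note] For reference, the same provider-prefix rule applies when adding models through the proxy's config.yaml rather than the Admin UI. A minimal sketch of a model entry (the alias and the environment-variable name are placeholders):

```yaml
model_list:
  - model_name: deepseek-chat            # alias shown to clients
    litellm_params:
      model: deepseek/deepseek-chat      # provider prefix is required
      api_key: os.environ/DEEPSEEK_API_KEY
```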

ishaan-jaff commented 1 month ago

Support for adding DeepSeek on the Admin UI is added here, @liffiton: https://github.com/BerriAI/litellm/pull/5857

ishaan-jaff commented 1 month ago

@liffiton can we hop on a call and learn how we can improve LiteLLM for you? (Saw you're a professor - we're working with some other universities and helping them give students access to LLMs)

My cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
My LinkedIn if you prefer DMs: https://www.linkedin.com/in/reffajnaahsi/

liffiton commented 1 month ago

Just to be clear, the issue here isn't specific to DeepSeek. The problem is what happens when you enter any invalid model in the admin UI: preventing further additions and spamming the logs.

And if it's alright with you, I strongly prefer email. Feel free to email me if you have questions for me or want to learn more about how I'm using LiteLLM.

ishaan-jaff commented 1 month ago

Can we set up a Slack support channel? @liffiton What's the best email to reach you at?

ishaan-jaff commented 1 month ago

> Just to be clear, the issue here isn't specific to DeepSeek. The problem is what happens when you enter any invalid model in the admin UI: preventing further additions and spamming the logs.

Tracking two issues before closing this

liffiton commented 1 month ago

My university email would be fine (don't care to post it here, but it's easy to find).