BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: IndexError: list index out of range version 1.16.16 #1351

Closed: marcloeb closed this issue 9 months ago

marcloeb commented 9 months ago

What happened?

I was using autogen with the simple script below. It throws an IndexError: list index out of range.

There is a bug in /Users///litellm/litellm/utils.py at line 3995, in get_llm_provider: model = model.split("/", 1)[1]
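For context, str.split("/", 1) returns a single-element list when the model name contains no "/", so indexing [1] goes out of range. A minimal reproduction of just that line, outside litellm:

model = "mistral"                     # no provider prefix
parts = model.split("/", 1)           # -> ["mistral"], only one element
print(parts[0])                       # "mistral"
# parts[1] raises IndexError: list index out of range

model = "ollama/mistral"              # provider prefix present
print(model.split("/", 1)[1])         # -> "mistral", no error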

The autogen script is:

import autogen

config_list_dolphine_mixtral = [
    {"base_url": "http://0.0.0.0:36292", "api_key": "NULL"}
]

config_list_mistral = [
    {"base_url": "http://0.0.0.0:8000", "api_key": "NULL"}
]

llm_config_mixtral = {"config_list": config_list_dolphine_mixtral}

llm_config_mistral = {"config_list": config_list_mistral}

assistant = autogen.AssistantAgent(
    name="Assistant",
    llm_config=llm_config_mistral,
)

coder = autogen.AssistantAgent(
    name="Coder",
    llm_config=llm_config_mixtral,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config_mistral,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction.
Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)

task = """Tell me a joke!"""

groupchat = autogen.GroupChat(agents=[user_proxy, coder, assistant], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config_mistral)
user_proxy.initiate_chat(manager, message=task)

Relevant log output

Traceback (most recent call last):
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 5605, in exception_type
    request=original_exception.request,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'IndexError' object has no attribute 'request'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/marcloeb/hardhat/litellm/litellm/proxy/proxy_server.py", line 1431, in chat_completion
    response = await litellm.acompletion(**data)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 2354, in wrapper_async
    raise e
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 2246, in wrapper_async
    result = await original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/marcloeb/hardhat/litellm/litellm/main.py", line 227, in acompletion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 6572, in exception_type
    raise original_exception
  File "/Users/marcloeb/hardhat/litellm/litellm/main.py", line 185, in acompletion
    _, custom_llm_provider, _, _ = get_llm_provider(
                                   ^^^^^^^^^^^^^^^^^
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 4123, in get_llm_provider
    raise e
  File "/Users/marcloeb/hardhat/litellm/litellm/utils.py", line 3995, in get_llm_provider
    model = model.split("/", 1)[1]
            ~~~~~~~~~~~~~~~~~~~^^^
IndexError: list index out of range
INFO:     127.0.0.1:58919 - "POST /chat/completions HTTP/1.1" 500 Internal Server Error
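Two things are happening in that log: the primary IndexError from the missing provider prefix, and a secondary AttributeError because the exception mapper assumes the caught error carries an httpx-style .request attribute before falling back to re-raising the original. A toy sketch of that chain, not litellm's actual code:

# Toy sketch, not litellm's implementation: illustrates why the log shows two
# chained tracebacks. The mapper assumes an httpx-style error that carries a
# .request attribute; a plain IndexError has none, so the AttributeError is
# swallowed and the original error is re-raised.
import traceback

def exception_type_sketch(original_exception: Exception) -> Exception:
    try:
        return RuntimeError(f"mapped error for request: {original_exception.request}")
    except AttributeError:
        raise original_exception

try:
    exception_type_sketch(IndexError("list index out of range"))
except IndexError:
    # The printed traceback shows the same pattern as the log above:
    # AttributeError first, then "During handling of the above exception, ...".
    traceback.print_exc()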

Twitter / LinkedIn details

No response

krrishdholakia commented 9 months ago

Hey @marcloeb, how're you starting the litellm server? Is there a config involved?

Any repro steps for the server would be great.

I just pushed an update in v1.16.17 which should also print out the original model passed in.

marcloeb commented 9 months ago

Yes, I start the litellm server with litellm --model mistral. No config involved.

krrishdholakia commented 9 months ago

litellm --model ollama/mistral

^ I believe you're missing the provider name

https://docs.litellm.ai/docs/proxy/quick_start#supported-llms
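For reference, once the proxy is running with a provider-prefixed model, any OpenAI-compatible client can be pointed at it. A minimal sketch using the openai Python SDK; the port and model name here are assumptions based on the script above, adjust for your setup:

# Start the proxy first with the provider prefix, e.g.:
#   litellm --model ollama/mistral --port 8000
from openai import OpenAI

# The litellm proxy speaks the OpenAI API, so any placeholder key works locally.
client = OpenAI(base_url="http://0.0.0.0:8000", api_key="NULL")

response = client.chat.completions.create(
    model="ollama/mistral",  # assumed model name, matching what the proxy was started with
    messages=[{"role": "user", "content": "Tell me a joke!"}],
)
print(response.choices[0].message.content)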

marcloeb commented 9 months ago

Yes, this was the issue. A message like "provider is missing in the model name" would have been helpful. Thanks for pointing this out.
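A sketch of the kind of check that would surface this earlier; this is a hypothetical helper for illustration, not litellm's actual code:

# Hypothetical helper, not litellm's implementation: validate the provider
# prefix up front instead of letting the split raise an IndexError.
def split_provider_and_model(model: str) -> tuple[str, str]:
    if "/" not in model:
        raise ValueError(
            f"provider is missing in the model name '{model}'; "
            "expected '<provider>/<model>', e.g. 'ollama/mistral'"
        )
    provider, model_name = model.split("/", 1)
    return provider, model_name

# split_provider_and_model("mistral")          -> ValueError with a clear message
# split_provider_and_model("ollama/mistral")   -> ("ollama", "mistral")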

krrishdholakia commented 9 months ago

@marcloeb, what did you look at to know how to construct the CLI command?

I'll update the instructions there too.