Closed: marcloeb closed this issue 9 months ago.
Hey @marcloeb, how're you starting the litellm server? Is there a config involved?
Any steps for repro for the server would be great.
I just pushed an update for v1.16.17, which should also print out the original model passed in.
Yes, I start the litellm server with `litellm --model mistral`. No config involved.
litellm --model ollama/mistral
^ I believe you're missing the provider name
https://docs.litellm.ai/docs/proxy/quick_start#supported-llms
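As a quick sanity check that the proxy resolves the provider correctly, something like this should work once the proxy is running (a minimal sketch, assuming the default port 8000 and the OpenAI v1 Python client; the litellm proxy exposes an OpenAI-compatible API):

```python
# Minimal sketch: point the OpenAI client at the local litellm proxy.
# Assumes the proxy was started with `litellm --model ollama/mistral`
# and is listening on the default port 8000.
import openai

client = openai.OpenAI(base_url="http://0.0.0.0:8000", api_key="NULL")
resp = client.chat.completions.create(
    model="ollama/mistral",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```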
Yes, this was the issue. An info message like "provider is missing in the model name" would have been helpful. Thanks for pointing this out.
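For illustration, the kind of guard that would surface this earlier might look like the following (a hypothetical sketch, not litellm's actual code; `split_provider` is an invented name):

```python
# Hypothetical guard, not litellm's actual implementation: fail with a
# descriptive error instead of an IndexError when the prefix is missing.
def split_provider(model: str) -> tuple[str, str]:
    if "/" not in model:
        raise ValueError(
            f"Provider is missing in the model name: {model!r}. "
            "Expected '<provider>/<model>', e.g. 'ollama/mistral'."
        )
    provider, name = model.split("/", 1)
    return provider, name
```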
@marcloeb what did you look at to learn how to construct the CLI command?
I'll update the instructions there too.
What happened?
I was using autogen with the simple script below. It throws an `IndexError: list index out of range`. In /Users///litellm/litellm/utils.py there is a bug at this line:

```
File "/Users///litellm/litellm/utils.py", line 3995, in get_llm_provider
    model = model.split("/", 1)[1]
IndexError: list index out of range
```
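The mechanics of the crash reduce to a one-element list being indexed at position 1:

```python
model = "mistral"            # no "<provider>/" prefix
parts = model.split("/", 1)  # ["mistral"], a single element
parts[1]                     # IndexError: list index out of range
# With "ollama/mistral", split yields ["ollama", "mistral"] and parts[1] works.
```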
The autogen file is:

```python
import autogen

config_list_dolphine_mixtral = [
    {'base_url': "http://0.0.0.0:36292", 'api_key': "NULL"}
]

config_list_mistral = [
    {'base_url': "http://0.0.0.0:8000", 'api_key': "NULL"}
]

llm_config_mixtral = {"config_list": config_list_dolphine_mixtral}
llm_config_mistral = {"config_list": config_list_mistral}

assistant = autogen.AssistantAgent(name="Assistant", llm_config=llm_config_mistral)
coder = autogen.AssistantAgent(name="Coder", llm_config=llm_config_mixtral)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={"work_dir": "web"},
    llm_config=llm_config_mistral,
    system_message="""Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet.""",
)

task = """ Tell me a joke! """

groupchat = autogen.GroupChat(agents=[user_proxy, coder, assistant], messages=[], max_round=12)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config_mistral)
user_proxy.initiate_chat(manager, message=task)
```
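As far as I can tell, the repro steps reduce to calling litellm with the unprefixed model name (a minimal sketch; whether this path is hit exactly as shown depends on the affected litellm version):

```python
# Minimal repro sketch (assumes the affected litellm version): a bare
# "mistral" reaches get_llm_provider, which splits on "/" and indexes [1],
# raising IndexError because there is no provider prefix.
import litellm

litellm.completion(
    model="mistral",  # works with "ollama/mistral"
    messages=[{"role": "user", "content": "Tell me a joke!"}],
)
```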