Open adrianzhang opened 1 day ago
I didn't know it's possible to have no model name :D . That's an interesting configuration. What happens if you just put some random string in the model box?
I put aaa as the provider and ccc as the model name, and got the same error.
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aaa/ccc
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
06:07:24 - openhands:ERROR: agent_controller.py:209 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Error while running the agent: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=aaa/ccc
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
06:07:24 - openhands:INFO: agent_controller.py:323 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Setting agent(CodeActAgent) state from AgentState.RUNNING to AgentState.ERROR
06:07:24 - openhands:INFO: agent_controller.py:323 - [Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] Setting agent(CodeActAgent) state from AgentState.ERROR to AgentState.ERROR
06:07:24 - OBSERVATION
[Agent Controller e3d605ec-3f77-4e38-97f4-7ac72ff47b5a] AgentStateChangedObservation(content='', agent_state=<AgentState.ERROR: 'error'>, observation='agent_state_changed')
The OpenHands settings are really hard to understand. Why does it define a fixed set of providers and models, and even show a predefined model list when Ollama is chosen as the provider? Developers using Ollama usually define their own models or try the newest ones, e.g. qwen2.5. Ollama works with these models very well, but OpenHands rules out a lot of these scenarios.
My understanding was that OpenHands is essentially a prompting agent: it translates coding instructions into prompts and sends them to any backend LLM provider, ChatGPT, Claude, X-AI, and so on, or locally to Ollama, llama.cpp, or other engines. Obviously I was wrong. Could you please let me know where I went wrong?
Hi @adrianzhang, sorry for the frustration! You were actually not wrong; let's figure out how to make this work.
OpenHands uses the LiteLLM SDK under the hood to support a lot of LLM APIs/providers, including custom ones. LiteLLM is not an inference engine, but it works with Ollama, LM Studio, and llama.cpp just the same, as long as you give LiteLLM (via our settings) an endpoint to call.
An important detail, which should make everything easier, is whether that endpoint is OpenAI-compatible. As far as I know, llama.cpp can serve one. In that case LiteLLM needs to receive a base_url and a model name with the "openai/" prefix; it will then route the call to that URL and add the right suffix (/chat/completions etc.).
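Roughly, the call that ends up going through LiteLLM looks like the sketch below (the URL, port, and model name are assumptions for a local llama.cpp server; adjust them to your setup):

```python
# Sketch only: assumes a llama.cpp server exposing an OpenAI-compatible API
# at http://localhost:8080/v1. The model name after "openai/" is arbitrary here.
import litellm

response = litellm.completion(
    model="openai/local-model",           # "openai/" prefix selects the OpenAI-compatible route
    api_base="http://localhost:8080/v1",  # your local endpoint (llama.cpp, LM Studio, ...)
    api_key="dummy",                      # local servers typically accept any key
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```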
Does that make sense?
Hi @enyst,
Thank you so much for the detailed explanation.
I assume the root cause of the problem is that OpenHands doesn't use the Ollama API to query the available models before passing parameters to LiteLLM.
I read the LiteLLM source code after getting your reply and found that it validates the parameters (for security/robustness reasons, of course). So LiteLLM behaves correctly. What OpenHands could do is query the model names from Ollama before sending the model name parameter to LiteLLM.
First, Ollama is built on llama.cpp, so they expose many similar functions; the same solution should work for both.
Second, the Ollama API documentation describes an endpoint, GET /api/tags, whose response lists the installed models. That would be an easy way to get the real model list from an Ollama instance; see the sketch below.
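Something like this would be enough (just a sketch, assuming a local Ollama instance on its default port 11434):

```python
# Sketch: list the models an Ollama instance actually has installed.
# Assumes the default Ollama address http://localhost:11434.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    data = json.load(resp)

# The response has a "models" list; each entry carries a "name" like "qwen2.5:latest".
model_names = [m["name"] for m in data.get("models", [])]
print(model_names)
```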
What do you think?
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
When I run a local LLM service with llama.cpp, I have verified that it works fine without any key or model name. However, when I set the same URL in OpenHands, it always shows an error in the Docker console.
Does this mean users have no way to use a local model other than Ollama?
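For reference, a quick check along these lines (just a sketch; the port comes from a default llama-server invocation and may differ) shows the llama.cpp OpenAI-compatible endpoint answering without any key or real model name:

```python
# Sketch: call a local llama.cpp server through its OpenAI-compatible endpoint.
# Assumes the server was started with something like `llama-server -m model.gguf --port 8080`.
import json
import urllib.request

payload = json.dumps({
    "model": "anything",  # llama.cpp does not require a real model name here
    "messages": [{"role": "user", "content": "Say hello"}],
}).encode()

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```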
OpenHands Installation
Docker command in README
OpenHands Version
latest docker image
Operating System
Linux
Logs, Errors, Screenshots, and Additional Context