Open star8618 opened 2 days ago
Traceback (most recent call last):
File "/app/openhands/controller/agent_controller.py", line 195, in start_step_loop
await self._step()
File "/app/openhands/controller/agent_controller.py", line 468, in _step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/openhands/agenthub/micro/agent.py", line 77, in step
resp = self.llm.completion(
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 336, in wrapped_f
return copy(f, *args, kw)
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 475, in call
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/init.py", line 398, in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
02:08:06 - openhands:ERROR: agent_controller.py:201 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Error while running the agent: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
02:08:06 - openhands:INFO: agent_controller.py:307 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Setting agent(VerifierAgent) state from AgentState.RUNNING to AgentState.ERROR
02:08:06 - openhands:INFO: agent_controller.py:307 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Setting agent(VerifierAgent) state from AgentState.ERROR to AgentState.ERROR
INFO: 172.17.0.1:63318 - "GET /assets/_oh.app.browser-6qiMwjoA.js HTTP/1.1" 200 OK
02:08:25 - openhands:INFO: github.py:15 - Initializing UserVerifier
02:08:25 - openhands:INFO: github.py:28 - GITHUB_USER_LIST_FILE not configured
02:08:25 - openhands:INFO: github.py:49 - GITHUB_USERS_SHEET_ID not configured
02:08:25 - openhands:INFO: github.py:86 - No user verification sources configured - allowing all users
Hello. Based on the litellm docs: https://docs.litellm.ai/docs/providers/xai I can see that you passed in the model correctly. Are you using an LLM proxy by any chance? Or are you using Grok straight from the xAI platform?
Same problem; I even tried passing the base URL, but got the same error.
@shadowvvf can you please mention what you put for your model? And I'm guessing you specified the base_url because you are using a proxy? The reason I'm asking is that sometimes you have to prefix the model. For example, if you use OpenRouter, it would be: openrouter/anthropic/claude...
I just tried it; we added it to the UI recently, and this seems to work as expected:
As @mamoodi mentioned, base_url is optional in this case; litellm will route it to x.ai.
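For reference, here is roughly what this boils down to on the litellm side (a minimal sketch, assuming a litellm version that ships xAI support; the XAI_API_KEY variable and the api_base value are illustrative, not something from this thread):
import os
import litellm

# The "xai/" prefix is what tells litellm which provider to call.
response = litellm.completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "ping"}],
    api_key=os.environ["XAI_API_KEY"],  # placeholder env var for the xAI key
    # api_base="https://api.x.ai/v1",   # optional; litellm routes xai/ models itself (assumed default)
)
print(response.choices[0].message.content)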
I also got:
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
I just set the AI Provider Configuration to xai and grok-beta and then entered the API key, nothing else.
Got the same bug:
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
I have the same problem. I installed Docker on Windows 11; my OpenHands version is 0.13.
Same problem here.
We must have fixed something very recently, just these days, because it works now on main. I think we are going to make a new release really soon. In the meantime, could you try with main instead of 0.13?
I think the updated litellm brought in the XAI support
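If you want to check that before pulling a new image, here is a quick diagnostic sketch (my own, not from the thread; litellm.provider_list is assumed to be the public attribute listing known providers):
from importlib.metadata import version
import litellm

print("litellm version:", version("litellm"))
# An older litellm without xAI support will not list the "xai" provider,
# which matches the "LLM Provider NOT provided" error above.
print("knows xai:", "xai" in litellm.provider_list)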
If you all don't mind trying main for now and confirming it works:
docker run -it --pull=always \
-e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
-e LOG_ALL_EVENTS=true \
--add-host host.docker.internal:host-gateway \
--name openhands-app \
docker.all-hands.dev/all-hands-ai/openhands:main
Hopefully we will have a release soon.
Can you please try with the new release? Redownloading openhands:0.13 should work; we made a patch.
@star8618 @jordan2816 @ciknight @Goblin-wt @guangquanshao @shadowvvf
We are using litellm to support many providers, and as Tobi said, it needed an update.
After redownloading openhands:0.13, xai can run 👍, but it takes a long time to download the image when it is started for the first time. It would be better if more information could be shown during that first download.
@enyst
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
OpenHands Installation
Docker command in README
OpenHands Version
No response
Operating System
None
Logs, Errors, Screenshots, and Additional Context
No response