All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More
https://all-hands.dev
MIT License
36.23k stars 4.13k forks

[Bug]: xai/grok-beta model getting litellm.BadRequestError #4882

Open star8618 opened 2 days ago

star8618 commented 2 days ago

Is there an existing issue for the same bug?

Describe the bug and reproduction steps

litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

OpenHands Installation

Docker command in README

OpenHands Version

No response

Operating System

None

Logs, Errors, Screenshots, and Additional Context

No response

star8618 commented 2 days ago

Traceback (most recent call last):
  File "/app/openhands/controller/agent_controller.py", line 195, in start_step_loop
    await self._step()
  File "/app/openhands/controller/agent_controller.py", line 468, in _step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/openhands/agenthub/micro/agent.py", line 77, in step
    resp = self.llm.completion(
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 336, in wrapped_f
    return copy(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 475, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 376, in iter
    result = action(retry_state)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 398, in <lambda>
    self._add_action_func(lambda rs: rs.outcome.result())
                                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 478, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/openhands/llm/llm.py", line 196, in wrapper
    resp: ModelResponse = completion_unwrapped(*args, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1013, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 903, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2999, in completion
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 906, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 313, in get_llm_provider
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

02:08:06 - openhands:ERROR: agent_controller.py:201 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Error while running the agent: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
02:08:06 - openhands:INFO: agent_controller.py:307 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Setting agent(VerifierAgent) state from AgentState.RUNNING to AgentState.ERROR
02:08:06 - openhands:INFO: agent_controller.py:307 - [Agent Controller 0a757a3e-475f-48b2-87c9-e68ccced5c3d] Setting agent(VerifierAgent) state from AgentState.ERROR to AgentState.ERROR
INFO:     172.17.0.1:63318 - "GET /assets/_oh.app.browser-6qiMwjoA.js HTTP/1.1" 200 OK
02:08:25 - openhands:INFO: github.py:15 - Initializing UserVerifier
02:08:25 - openhands:INFO: github.py:28 - GITHUB_USER_LIST_FILE not configured
02:08:25 - openhands:INFO: github.py:49 - GITHUB_USERS_SHEET_ID not configured
02:08:25 - openhands:INFO: github.py:86 - No user verification sources configured - allowing all users

mamoodi commented 2 days ago

Hello. Based on the litellm docs (https://docs.litellm.ai/docs/providers/xai), I can see that you passed in the model correctly. Are you using an LLM proxy by any chance? Or are you using Grok straight from the xAI platform?

shadowvvf commented 1 day ago

Same problem. I even tried passing the base URL, but got the same error. (screenshot attached)

mamoodi commented 1 day ago

@shadowvvf can you please mention what you put for your model? And I'm guessing you specified the base_url because you are using a proxy? The reason I'm asking is that sometimes you have to prefix the model. For example, if you use OpenRouter, it would be: openrouter/anthropic/claude...
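For illustration, here is a minimal litellm sketch of how that provider prefix drives routing (not OpenHands code; the model name and API key below are placeholders, not recommendations):

# Minimal sketch: litellm picks the provider from the prefix of the model string.
import os
from litellm import completion

os.environ["OPENROUTER_API_KEY"] = "your-openrouter-key"  # placeholder

# The "openrouter/" prefix tells litellm to route this call through OpenRouter.
resp = completion(
    model="openrouter/anthropic/claude-3.5-sonnet",  # illustrative model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)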

enyst commented 1 day ago

I just tried it. We added this model to the UI recently, and it seems to work as expected:

(screenshot from 2024-11-12 attached)

As @mamoodi mentioned, base_url is optional in this case; litellm will route it to x.ai.
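For reference, a minimal sketch following the litellm xAI provider docs linked above (the API key value is a placeholder): the xai/ prefix alone is enough, with no base_url.

# Minimal sketch based on the litellm xAI provider docs: no base_url needed,
# litellm resolves the x.ai endpoint from the "xai/" prefix.
import os
from litellm import completion

os.environ["XAI_API_KEY"] = "your-xai-api-key"  # placeholder

resp = completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)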

jordan2816 commented 1 day ago

I also got litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

I just set the AI Provider Configuration to xai and grok-beta and then entered the API key, nothing else.

guangquanshao commented 1 day ago

I just set the configuration to xai and grok-beta and then entered the API key, nothing else, and got the same bug (screenshots attached): litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=xai/grok-beta Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

ciknight commented 1 day ago

I have the same problem. I installed Docker on Windows 11; the OpenHands version is 0.13.

Goblin-wt commented 1 day ago

same problem

enyst commented 1 day ago

It must be that we fixed something very recently, just in the last few days, because it works now on main. I think we are going to make a new release really soon.

In the meantime, could you try with main instead of 0.13?

tobitege commented 1 day ago

I think the updated litellm brought in the xAI support.

mamoodi commented 1 day ago

If you all don't mind trying main for now and confirming it works:

docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    -e LOG_ALL_EVENTS=true \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:main

Hopefully we will have a release soon.

enyst commented 15 hours ago

Can you please try with the new release? Re-downloading openhands:0.13 should work; we made a patch. @star8618 @jordan2816 @ciknight @Goblin-wt @guangquanshao @shadowvvf

We are using litellm to support many providers, and as Tobi said, it needed an update.
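If anyone wants to confirm locally whether their installed litellm is new enough, here is a small diagnostic sketch. get_llm_provider is the internal helper that raises in the traceback above, so calling it directly is only a check, not a documented API:

# Diagnostic sketch: check whether the installed litellm recognizes the xAI
# provider. An old litellm raises BadRequestError here; an updated one
# resolves the "xai" provider from the model string.
from importlib.metadata import version

import litellm

print("litellm version:", version("litellm"))

try:
    model, provider, _api_key, _api_base = litellm.get_llm_provider("xai/grok-beta")
    print(f"Provider resolved: {provider} (model: {model})")
except litellm.BadRequestError as err:
    print("Provider not recognized - upgrade litellm:", err)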

ciknight commented 1 hour ago

After re-downloading openhands:0.13, xai works 👍, but it takes a long time to download the image when it is started for the first time. It would be better if more progress information could be shown. @enyst