All-Hands-AI / OpenHands


Getting OpenAIException when not using OpenAI #439

Closed: vdsasi closed this 5 months ago

vdsasi commented 6 months ago

[screenshot]

[screenshot]

    PLAN:

    šŸ”µ 0 write snake game in python using pygame

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    AGENT ERROR:
    Error condensing thoughts: OpenAIException - 404 page not found

    Traceback (most recent call last):
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
        raise e
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 335, in completion
        response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
      File "/home/sasi/.local/lib/python3.10/site-packages/openai/_utils/_utils.py", line 275, in wrapper
        return func(*args, **kwargs)
      File "/home/sasi/.local/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 667, in create
        return self._post(
      File "/home/sasi/.local/lib/python3.10/site-packages/openai/_base_client.py", line 1208, in post
        return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
      File "/home/sasi/.local/lib/python3.10/site-packages/openai/_base_client.py", line 897, in request
        return self._request(
      File "/home/sasi/.local/lib/python3.10/site-packages/openai/_base_client.py", line 988, in _request
        raise self._make_status_error_from_response(err.response) from None
    openai.NotFoundError: 404 page not found

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/main.py", line 990, in completion
        raise e
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/main.py", line 963, in completion
        response = openai_chat_completions.completion(
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 382, in completion
        raise OpenAIError(status_code=e.status_code, message=str(e))
    litellm.llms.openai.OpenAIError: 404 page not found

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/sasi/OpenDevin/agenthub/langchains_agent/utils/monologue.py", line 30, in condense
        resp = llm.completion(messages=messages)
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/utils.py", line 2807, in wrapper
        raise e
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/utils.py", line 2705, in wrapper
        result = original_function(*args, **kwargs)
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/main.py", line 2094, in completion
        raise exception_type(
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/utils.py", line 8296, in exception_type
        raise e
      File "/home/sasi/.local/lib/python3.10/site-packages/litellm/utils.py", line 7112, in exception_type
        raise NotFoundError(
    litellm.exceptions.NotFoundError: OpenAIException - 404 page not found

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/sasi/OpenDevin/opendevin/controller/agent_controller.py", line 89, in step
        action = self.agent.step(self.state)
      File "/home/sasi/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 124, in step
        self._add_event(prev_action.to_dict())
      File "/home/sasi/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 74, in _add_event
        self.monologue.condense(self.llm)
      File "/home/sasi/OpenDevin/agenthub/langchains_agent/utils/monologue.py", line 35, in condense
        raise RuntimeError(f"Error condensing thoughts: {e}")
    RuntimeError: Error condensing thoughts: OpenAIException - 404 page not found

    OBSERVATION:
    Error condensing thoughts: OpenAIException - 404 page not found

Exited before finishing

I don't know why it is raising this exception. I am running a local LLM via Ollama and got this error.

enyst commented 6 months ago

Do you have LLM_EMBEDDING_MODEL set, and to what value? You may want to look here: https://github.com/OpenDevin/OpenDevin?tab=readme-ov-file#picking-a-model

vdsasi commented 6 months ago

> Do you have LLM_EMBEDDING_MODEL set, and to what value? You may want to look here: https://github.com/OpenDevin/OpenDevin?tab=readme-ov-file#picking-a-model

Yes, as mentioned in the README.md file, I set the value to llama2. By the way, I am also using the llama2 model via Ollama to run OpenDevin.

rbren commented 6 months ago

@vdsasi can you share your config.toml and env variables (redacting any API keys)?

Something is still pointing to OpenAI (which is the default for LLM_MODEL)

vdsasi commented 6 months ago

> @vdsasi can you share your config.toml and env variables (redacting any API keys)?
>
> Something is still pointing to OpenAI (which is the default for LLM_MODEL)

    LLM_BASE_URL="http://localhost:11434"
    LLM_EMBEDDING_MODEL="llama2"
    LLM_MODEL="ollama/llama2"

These are the env variables that I assigned for running with Ollama.
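
As a side note for debugging, a minimal sketch like the one below (not from OpenDevin itself; the model name and base URL are simply the values from this comment) calls LiteLLM directly with the same settings, which helps tell whether the 404 comes from the Ollama endpoint or from how OpenDevin builds its configuration:

    # Hedged sketch: exercise LiteLLM directly with the same Ollama settings.
    # If this works, the endpoint is fine and the problem is in how OpenDevin picks up the config.
    import litellm

    response = litellm.completion(
        model="ollama/llama2",              # same value as LLM_MODEL above
        api_base="http://localhost:11434",  # same value as LLM_BASE_URL above
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(response.choices[0].message.content)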

rbren commented 6 months ago

This does look right to me...

vdsasi commented 6 months ago

> This does look right to me...

Even though everything seems fine, it is not working and keeps giving this OpenAIException error.

rbren commented 6 months ago

Yeah, sorry I wasn't clear: I'm still confused as to what's happening and why you're getting this error.

You could try adding some logs to llm.py to see if it's getting initialized with other variables. Otherwise we'll need to wait for someone to try and repro this.
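
As a rough illustration of the kind of logging meant here (a sketch, not the actual OpenDevin code; the attribute names follow the snippet shared later in this thread and may differ from the real source), something like this called from the LLM constructor would show which values it actually receives:

    # Hypothetical helper for debugging opendevin/llm/llm.py's LLM.__init__.
    import logging

    logger = logging.getLogger("opendevin.llm")

    def log_llm_config(model, base_url, api_key):
        # Report what the LLM was initialized with, without leaking the key itself.
        logger.info(
            "LLM init: model=%r base_url=%r api_key_provided=%s",
            model, base_url, bool(api_key),
        )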

Getting Ollama working does seem to be pretty challenging, so we have another ticket open to add docs for it specifically.

vdsasi commented 6 months ago

> Yeah, sorry I wasn't clear: I'm still confused as to what's happening and why you're getting this error.
>
> You could try adding some logs to llm.py to see if it's getting initialized with other variables. Otherwise we'll need to wait for someone to try and repro this.
>
> Getting Ollama working does seem to be pretty challenging, so we have another ticket open to add docs for it specifically.

[screenshot]

I think this helps. Even though I am setting LLM_MODEL=ollama/llama2, the LLM is not picking up that value. There seems to be a problem with how the environment variables are being read.

[screenshot]

    self.model = "ollama/llama2"
    self.api_key = api_key if api_key else DEFAULT_API_KEY
    self.base_url = base_url if base_url else DEFAULT_BASE_URL
    self._debug_dir = debug_dir if debug_dir else PROMPT_DEBUG_DIR

I hardcoded the value of self.model in llm.py's LLM __init__() method to ollama/llama2; after that, the self.api_key value also changed even though I didn't touch it. This is one observation I made.
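
One plausible explanation for that observation (a hypothesis, not a statement about the actual OpenDevin code) is the usual environment-variable-with-default pattern: if the variable isn't visible to the process that builds the config, the OpenAI defaults win, and hardcoding the model only makes the mismatch with the defaulted API key obvious. A minimal sketch of the pattern, with assumed names and defaults:

    # Hypothetical sketch of a default-fallback config; names and defaults are assumptions.
    import os

    DEFAULT_MODEL = "gpt-3.5-turbo"   # an OpenAI model as the assumed default
    DEFAULT_BASE_URL = None           # None would mean the standard OpenAI endpoint

    model = os.getenv("LLM_MODEL", DEFAULT_MODEL)
    base_url = os.getenv("LLM_BASE_URL", DEFAULT_BASE_URL)
    api_key = os.getenv("LLM_API_KEY")  # unset -> None, so a default key may be substituted later

    # If LLM_MODEL is exported in another shell, or not passed into the Docker container,
    # `model` silently falls back to the OpenAI default and requests go to the OpenAI endpoint.
    print(model, base_url, bool(api_key))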

rbren commented 6 months ago

We've done a bunch of work over the last week here. @vdsasi can you give it another shot with a fresh install?

enyst commented 6 months ago

> I hardcoded the value of self.model in llm.py's LLM __init__() method to ollama/llama2; after that, the self.api_key value also changed even though I didn't touch it. This is one observation I made.

There were a couple of times in the past when the UI was sending outdated values. So if you had openai set for a single run for some reason and then changed to llama for new runs, it wasn't picking up the change; the UI kept sending the openai values back to the backend. That could explain this, and clearing local storage should fix it.

This particular behavior has been fixed in main, and I believe it has changed recently too... I'm not sure, possibly more than once since this issue was first reported.

davidceka commented 5 months ago

If I may add to this conversation: I cleaned my install by removing the folder and cloning it again, then tried to run it with the proposed Docker solution, but like @vdsasi I keep getting an OpenAI error when passing specifically this input:

    docker run \
        --add-host host.docker.internal=host-gateway \
        -e LLM_API_KEY="ollama" \
        -e LLM_MODEL="ollama/codellama:7b" \
        -e LLM_EMBEDDING_MODEL="codellama:7b" \
        -e LLM_BASE_URL="http://host.docker.internal:11434" \
        -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
        -e SANDBOX_TYPE=exec \
        -v $WORKSPACE_BASE:/opt/workspace_base \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -p 3000:3000 \
        ghcr.io/opendevin/opendevin:0.4.0

EDIT: trying the non-Docker version as well results in the MonologueAgent choosing gpt-3.5-turbo.

The config.toml is composed like this: [screenshot]
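
Independently of the config, it can help to confirm that the Ollama endpoint is reachable from wherever OpenDevin runs (from inside the container that is http://host.docker.internal:11434). A small check along these lines, assuming Ollama's standard /api/tags model-listing endpoint:

    # Hedged connectivity check: list the models Ollama exposes at the configured base URL.
    import json
    import urllib.request

    BASE_URL = "http://host.docker.internal:11434"  # or http://localhost:11434 outside Docker

    with urllib.request.urlopen(f"{BASE_URL}/api/tags", timeout=5) as resp:
        tags = json.load(resp)

    # The "name" fields should match what `ollama list` prints, e.g. "codellama:7b".
    print([m["name"] for m in tags.get("models", [])])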

enyst commented 5 months ago

@davidceka Can you please try this: run, then go to the UI settings and enter the model there. If you can't find it in the list, just enter it as you have it.

I would also suggest taking a look at the documentation we have tried to gather on local LLMs; it seems you need the full model name as returned by `ollama list`: https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/LocalLLMs.md

The reason for suggesting the UI setting is that we have changed the intended behavior recently: the model in the toml file will no longer be used when you are running with a UI; it's the UI setting that will be applied.

davidceka commented 5 months ago

> @davidceka Can you please try this: run, then go to the UI settings and enter the model there. If you can't find it in the list, just enter it as you have it.
>
> I would also suggest taking a look at the documentation we have tried to gather on local LLMs; it seems you need the full model name as returned by `ollama list`: https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/LocalLLMs.md
>
> The reason for suggesting the UI setting is that we have changed the intended behavior recently: the model in the toml file will no longer be used when you are running with a UI; it's the UI setting that will be applied.

Using the UI works really well now; the model is correctly loaded and communicates. It still crashes sometimes, but I think that's due to the limitations of my hardware. Thank you.

enyst commented 5 months ago

Thank you for sharing @davidceka !

This issue is old and the project has changed significantly, including on this exact matter. I'll close it. I think we have other issues about OpenAI exceptions with unclear causes, so we can track them there if they still exist; also, the behavior changes in OpenDevin are significant enough that we need to look at this on updated installations.

@rbren @vdsasi please feel free to reopen if you don't think this is warranted or it's still happening.

vdsasi commented 5 months ago

Yeah! It's working fine now.