Closed. zhonggegege closed this issue 5 months ago.
`curl http://192.168.0.93:1234/v1` can connect normally.

Set `LLM_MODEL` to `"openai/lm-studio"`.
Log:

```
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ docker run -e LLM_API_KEY -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -e SANDBOX_TYPE=exec -v $WORKSPACE_BASE:/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal=host-gateway ghcr.io/opendevin/opendevin:0.4.0
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO:     172.17.0.1:52160 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO:     ('172.17.0.1', 52170) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI2YzNhZmY0OC1mZDIwLTRmNjAtYmZhOS0yYmY3OTk3NDJlNDQifQ.dr-5Izu4B2Ziz0plH-KU7DCSNHL2sue7FU-x77iOEJk" [accepted]
INFO:     connection open
Starting loop_recv for sid: 6c3aff48-fd20-4f60-bfa9-2bf799742e44
INFO:     172.17.0.1:52160 - "GET /locales/zh/translation.json HTTP/1.1" 404 Not Found
INFO:     172.17.0.1:52160 - "GET /api/refresh-files HTTP/1.1" 200 OK
04:01:10 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM anyscale/meta-llama/Llama-2-70b-chat-hf
04:01:10 - opendevin:INFO: llm.py:51 - Initializing LLM with model: anyscale/meta-llama/Llama-2-70b-chat-hf
04:01:10 - opendevin:INFO: exec_box.py:221 - Container stopped
04:01:10 - opendevin:INFO: exec_box.py:239 - Container started
INFO:     172.17.0.1:52166 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO:     172.17.0.1:52160 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO:     172.17.0.1:52160 - "GET /api/agents HTTP/1.1" 200 OK
04:01:39 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
04:01:39 - opendevin:INFO: llm.py:51 - Initializing LLM with model: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
04:01:50 - opendevin:INFO: exec_box.py:221 - Container stopped
04:01:50 - opendevin:INFO: exec_box.py:239 - Container started
04:05:54 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
04:05:54 - opendevin:INFO: llm.py:51 - Initializing LLM with model: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
04:06:05 - opendevin:INFO: exec_box.py:221 - Container stopped
04:06:05 - opendevin:INFO: exec_box.py:239 - Container started
04:06:22 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16
04:06:22 - opendevin:INFO: llm.py:51 - Initializing LLM with model: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16
04:06:33 - opendevin:INFO: exec_box.py:221 - Container stopped
04:06:33 - opendevin:INFO: exec_box.py:239 - Container started
04:08:18 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM MaziyarPanahi/WizardLM-2-7B-GGUF
04:08:18 - opendevin:INFO: llm.py:51 - Initializing LLM with model: MaziyarPanahi/WizardLM-2-7B-GGUF
04:08:29 - opendevin:INFO: exec_box.py:221 - Container stopped
04:08:29 - opendevin:INFO: exec_box.py:239 - Container started
============== STEP 0
04:10:41 - PLAN
1
04:10:41 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #1 | You can customize these settings in the configuration.
04:10:43 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #2 | You can customize these settings in the configuration.
04:10:44 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #3 | You can customize these settings in the configuration.
04:10:46 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #4 | You can customize these settings in the configuration.
04:10:49 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #5 | You can customize these settings in the configuration.
04:10:49 - opendevin:ERROR: agent_controller.py:102 - Error in loop
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 662, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5944, in get_llm_provider
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5931, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model= MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
```
```
During handling of the above exception, another exception occurred:
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
```

Set `LLM_MODEL` to `"openai/lm-studio"`.
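For context, the `LLM Provider NOT provided` error means LiteLLM could not map the model string to a provider: it reads the segment before the first `/`, and `MaziyarPanahi` is a Hugging Face user name, not a provider prefix, while `openai/...` routes to the OpenAI-compatible backend. A toy sketch of that routing idea (illustrative only, not LiteLLM's actual implementation; the provider set here is a made-up subset):

```python
# Illustrative sketch of provider-prefix routing, NOT LiteLLM's real code.
# LiteLLM treats the segment before the first "/" as the provider name.
KNOWN_PROVIDERS = {"openai", "huggingface", "anyscale", "ollama"}  # subset, for illustration

def resolve_provider(model: str) -> str:
    prefix, _, remainder = model.partition("/")
    if remainder and prefix in KNOWN_PROVIDERS:
        return prefix
    # Mirrors the error seen in the log above.
    raise ValueError(f"LLM Provider NOT provided. You passed model={model}")

print(resolve_provider("openai/lm-studio"))                         # openai
print(resolve_provider("openai/MaziyarPanahi/WizardLM-2-7B-GGUF"))  # openai
# resolve_provider("MaziyarPanahi/WizardLM-2-7B-GGUF") raises ValueError:
# "MaziyarPanahi" is not a recognized provider prefix.
```

This is why prefixing the LM Studio model name with `openai/` makes the error go away: the name after the prefix is passed through to the server as-is.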
Thank you for your reply. After adding this variable, `docker run` still cannot connect to the LM-Studio server; the server never receives the connection. It also seems the front-end UI only accepts models from the drop-down options: if I enter a model served by LM-Studio, save it, and reopen the settings, the model options are empty again. The terminal shows the model is set, but the front-end UI then shows a model I did not select, which is very strange.
You can edit the model field. Add `-e LLM_MODEL="openai/lm-studio"` to the docker command.
Thanks for your reply. I tried removing the exported variable and adding the `-e` flag to the `docker run` command instead, but the result was the same as before: the same error message.
```
export LLM_API_KEY="lm-studio"
export WORKSPACE_BASE=/home/agent/OpenDevin/workspace
export LLM_BASE_URL="http://192.168.0.93:1234/v1"

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_MODEL="openai/lm-studio" \
    -e SANDBOX_TYPE=exec \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
```
There are still no request logs at http://192.168.0.93:1234.
I'm very surprised that in version 0.3.1, without setting the `LLM_MODEL` variable, the same settings and startup method could reach the server at 192.168.0.93 normally and get a correct response.
Try running without Docker: https://github.com/OpenDevin/OpenDevin/blob/main/Development.md

After step 3, run `poetry run python opendevin/main.py -d ./workspace -t "write bash script to print 5"`.
`make build` is done. It still seems to be the same error.
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ poetry run python opendevin/main.py -d ./workspace -t "write bash script to print 5"
Setting workspace base to /home/agent/OpenDevin/workspace
Running agent MonologueAgent (model: MaziyarPanahi/WizardLM-2-7B-GGUF) with task: "write bash script to print 5"
17:21:47 - opendevin:INFO: llm.py:52 - Initializing LLM with model: MaziyarPanahi/WizardLM-2-7B-GGUF
17:21:47 - opendevin:INFO: ssh_box.py:353 - Container stopped
17:21:47 - opendevin:WARNING: ssh_box.py:365 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
17:21:47 - opendevin:INFO: ssh_box.py:373 - Mounting workspace directory: /home/agent/OpenDevin/workspace
17:21:48 - opendevin:INFO: ssh_box.py:396 - Container started
17:21:49 - opendevin:INFO: ssh_box.py:413 - waiting for container to start: 1, container status: running
17:21:49 - opendevin:INFO: ssh_box.py:178 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 44715 opendevin@localhost
with the password '3f2b9301-5aa6-4098-8401-8901893f6a27' and report the issue on GitHub.
============== STEP 0
17:21:50 - PLAN
write bash script to print 5
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
17:21:50 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #1 | You can customize these settings in the configuration.
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
17:21:53 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #2 | You can customize these settings in the configuration.
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
17:21:54 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #3 | You can customize these settings in the configuration.
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
17:21:56 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #4 | You can customize these settings in the configuration.
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
17:22:01 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #5 | You can customize these settings in the configuration.
17:22:01 - opendevin:ERROR: agent_controller.py:103 - Error in loop
Traceback (most recent call last):
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/main.py", line 662, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 5944, in get_llm_provider
raise e
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 5931, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 99, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 212, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/agent.py", line 226, in step
resp = self.llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/opendevin/llm/llm.py", line 79, in wrapper
resp = completion_unwrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 2977, in wrapper
raise e
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 2875, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/main.py", line 2137, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 8665, in exception_type
raise e
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/litellm/utils.py", line 8633, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
ERROR:root: File "/home/agent/OpenDevin/opendevin/main.py", line 53, in <module>
ERROR:root:<class 'litellm.exceptions.APIConnectionError'>: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
In the `config.toml` file, set `LLM_MODEL="openai/MaziyarPanahi/WizardLM-2-7B-GGUF"`.
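Presumably the relevant `config.toml` entries would look something like the sketch below. The key names are assumed from the environment variables used elsewhere in this thread (`LLM_MODEL`, `LLM_BASE_URL`, `LLM_API_KEY`); I have not verified the exact 0.4.x config schema, so check the project docs before copying:

```toml
# Hypothetical config.toml sketch for pointing OpenDevin at LM Studio.
# Values taken from the exports earlier in this thread.
LLM_MODEL = "openai/MaziyarPanahi/WizardLM-2-7B-GGUF"
LLM_BASE_URL = "http://192.168.0.93:1234/v1"
LLM_API_KEY = "lm-studio"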
Wow, thank you for your reply. It's working, but only up to step 4.

Error log:
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ poetry run python opendevin/main.py -d ./workspace -t "write bash script to print 5"
Setting workspace base to /home/agent/OpenDevin/workspace
Running agent MonologueAgent (model: openai/MaziyarPanahi/WizardLM-2-7B-GGUF) with task: "write bash script to print 5"
19:49:59 - opendevin:INFO: llm.py:52 - Initializing LLM with model: openai/MaziyarPanahi/WizardLM-2-7B-GGUF
19:49:59 - opendevin:INFO: ssh_box.py:353 - Container stopped
19:49:59 - opendevin:WARNING: ssh_box.py:365 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
19:49:59 - opendevin:INFO: ssh_box.py:373 - Mounting workspace directory: /home/agent/OpenDevin/workspace
19:50:00 - opendevin:INFO: ssh_box.py:396 - Container started
19:50:01 - opendevin:INFO: ssh_box.py:413 - waiting for container to start: 1, container status: running
19:50:01 - opendevin:INFO: ssh_box.py:178 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 38091 opendevin@localhost
with the password '024b111b-1df9-4997-9ee7-aa735b573dea' and report the issue on GitHub.
============== STEP 0
19:50:02 - PLAN
write bash script to print 5
19:50:11 - ACTION
AgentThinkAction(thought="To write bash script to print 5, I need to create a script file first, and then write the command into that file. First, I'll create an empty file using `touch`. Then, I'll write the `echo` command into the file using `echo` or `write`
============== STEP 1
19:50:11 - PLAN
write bash script to print 5
19:50:18 - ACTION
AgentThinkAction(thought="To create a bash script that prints the number 5, I need to write a shell command that does this, and then save it into a file. The command to print '5' is `echo 5`. I will write this command into a new file. Let's call the file 'print_five.sh'. I will use the 'write' action to create this file.", action='think')
============== STEP 2
19:50:18 - PLAN
write bash script to print 5
19:50:23 - ACTION
FileWriteAction(path='print_five.sh', content='echo 5', start=0, end=-1, thoughts='', action='write')
19:50:23 - OBSERVATION
============== STEP 3
19:50:23 - PLAN
write bash script to print 5
19:50:30 - ACTION
AgentThinkAction(thought="Now that I've created the file 'print_five.sh', I need to make sure it contains the correct command to print the number 5. The command is `echo 5`. To verify this, I should use the `cat` command to read the contents of the file.", action='think')
============== STEP 4
19:50:30 - PLAN
write bash script to print 5
19:50:38 - opendevin:ERROR: agent_controller.py:103 - Error in loop
Traceback (most recent call last):
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 360, in repair_json
parsed_json = json.loads(json_str)
^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 7 column 1 (char 220)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 99, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 212, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/agent.py", line 229, in step
action = prompts.parse_action_response(action_resp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/prompts.py", line 165, in parse_action_response
action_dict = json.loads(response)
^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/json.py", line 36, in loads
s = repair_json(s)
^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 362, in repair_json
parsed_json = parser.parse()
^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 40, in parse
return self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
return self.parse_object()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 126, in parse_object
value = self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 80, in parse_json
return self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
return self.parse_object()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 130, in parse_object
obj[key] = value
TypeError: unhashable type: 'dict'
ERROR:root: File "/home/agent/OpenDevin/opendevin/main.py", line 53, in <module>
asyncio.run(main())
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/opendevin/main.py", line 49, in main
await controller.start(task)
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 140, in start
await self._run()
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 104, in _run
raise e
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 99, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 212, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/agent.py", line 229, in step
action = prompts.parse_action_response(action_resp)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/prompts.py", line 165, in parse_action_response
action_dict = json.loads(response)
^^^^^^^^^^^^^^^^^^^^
File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/json.py", line 36, in loads
s = repair_json(s)
^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 362, in repair_json
parsed_json = parser.parse()
^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 40, in parse
return self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
return self.parse_object()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 126, in parse_object
value = self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 80, in parse_json
return self.parse_json()
^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
return self.parse_object()
^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 130, in parse_object
obj[key] = value
~~~^^^^^
ERROR:root:<class 'TypeError'>: unhashable type: 'dict'
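The `Extra data` failure at STEP 4 is what `json.loads` raises when the model emits valid JSON followed by trailing text, a common failure mode for small local models; `json_repair` then crashes while trying to salvage the output. A minimal standard-library reproduction of the first half of that error (the response string here is invented for illustration):

```python
import json

# A response that is valid JSON followed by trailing commentary --
# the kind of output a small local model often produces.
response = '{"action": "read", "args": {"path": "print_five.sh"}}\nI will now read the file.'

try:
    json.loads(response)
except json.JSONDecodeError as e:
    # json.loads parses the object, then fails on the leftover text.
    print(e.msg)  # Extra data
```

So this crash is less a configuration problem than an LLM-quality problem: the model is not reliably producing a single clean JSON object per step.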
Following the above setting idea, I started it using Docker. Unfortunately, it still gives a 401 error: as soon as I enter a task description in the WEB UI, it fails no matter how I set up the model.

Error log (with these exports):

```
export LLM_API_KEY="lm-studio"
export WORKSPACE_BASE=/home/agent/OpenDevin/workspace
export LLM_BASE_URL="http://192.168.0.93:1234/v1"
export LLM_MODEL="openai/MaziyarPanahi/WizardLM-2-7B-GGUF"
```
```
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ docker run -e LLM_API_KEY -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -e LLM_MODEL="openai/lm-studio" -e SANDBOX_TYPE=exec -v $WORKSPACE_BASE:/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal=host-gateway ghcr.io/opendevin/opendevin:0.4.0
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO:     172.17.0.1:57392 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO:     ('172.17.0.1', 57412) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI2YzNhZmY0OC1mZDIwLTRmNjAtYmZhOS0yYmY3OTk3NDJlNDQifQ.dr-5Izu4B2Ziz0plH-KU7DCSNHL2sue7FU-x77iOEJk" [accepted]
INFO:     connection open
Starting loop_recv for sid: 6c3aff48-fd20-4f60-bfa9-2bf799742e44
INFO:     172.17.0.1:57392 - "GET /api/refresh-files HTTP/1.1" 200 OK
12:08:28 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM openai/MaziyarPanahi/WizardLM-2-7B-GGUF
12:08:28 - opendevin:INFO: llm.py:51 - Initializing LLM with model: openai/MaziyarPanahi/WizardLM-2-7B-GGUF
12:08:28 - opendevin:INFO: exec_box.py:221 - Container stopped
12:08:28 - opendevin:INFO: exec_box.py:239 - Container started
INFO:     172.17.0.1:57408 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO:     172.17.0.1:57392 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO:     172.17.0.1:57392 - "GET /api/agents HTTP/1.1" 200 OK
============== STEP 0
12:08:55 - PLAN
Use python to write a snake game
12:08:57 - opendevin:ERROR: agent_controller.py:102 - Error in loop
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 414, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 373, in completion
response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 581, in create
return self._post(
^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1232, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 921, in request
return self._request(
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1012, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1010, in completion
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 983, in completion
response = openai_chat_completions.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai.py", line 420, in completion
raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/app/opendevin/controller/agent_controller.py", line 98, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 211, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agenthub/monologue_agent/agent.py", line 218, in step
resp = self.llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/llm/llm.py", line 78, in wrapper
resp = completion_unwrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2977, in wrapper
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2875, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2137, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8665, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 7453, in exception_type
raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```
Does the model in the WEB UI need to be set? I set it up as you said:

```
docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_MODEL="openai/lm-studio" \
    -e SANDBOX_TYPE=exec \
    -e LLM_MODEL="openai/MaziyarPanahi/WizardLM-2-7B-GGUF" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
```

Error log:

```
litellm.exceptions.AuthenticationError: OpenAIException - Error code: 401 - {'error': {'message': 'Incorrect API key provided: lm-studio. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```
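One observation on the 401: the error body points at platform.openai.com, which suggests the request went to OpenAI's real endpoint rather than to LM Studio. The `docker run` commands shown above pass `-e LLM_API_KEY` and `-e LLM_MODEL`, but no `-e LLM_BASE_URL`, so the `export LLM_BASE_URL=...` on the host would not be visible inside the container. A small sanity check one could run inside the container (the variable name matches the exports used in this thread; this is a diagnostic sketch, not OpenDevin code):

```python
import os

def base_url_status(env: dict) -> str:
    """Report whether an LLM_BASE_URL override is visible.

    If it is missing, an OpenAI-compatible client falls back to its default
    endpoint (api.openai.com), where the dummy "lm-studio" key is rejected
    with a 401 -- matching the error above.
    """
    url = env.get("LLM_BASE_URL")
    if url is None:
        return "LLM_BASE_URL not set: requests go to the provider default endpoint"
    return f"LLM_BASE_URL={url}"

# Run inside the container started by the docker run command above:
print(base_url_status(dict(os.environ)))
```

If this prints the "not set" branch inside the container, adding `-e LLM_BASE_URL` to the `docker run` command would be the thing to try.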
Check the logs folder for the error (due to the quality of LLM).
19:49:59 - opendevin:INFO: llm.py:52 - Initializing LLM with model: openai/MaziyarPanahi/WizardLM-2-7B-GGUF
19:49:59 - opendevin:INFO: ssh_box.py:353 - Container stopped
19:49:59 - opendevin:WARNING: ssh_box.py:365 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
19:49:59 - opendevin:INFO: ssh_box.py:373 - Mounting workspace directory: /home/agent/OpenDevin/workspace
19:50:00 - opendevin:INFO: ssh_box.py:396 - Container started
19:50:01 - opendevin:INFO: ssh_box.py:413 - waiting for container to start: 1, container status: running
19:50:01 - opendevin:INFO: ssh_box.py:178 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 38091 opendevin@localhost
with the password '024b111b-1df9-4997-9ee7-aa735b573dea' and report the issue on GitHub.
19:50:02 - opendevin:INFO: agent_controller.py:197 - STEP 0
19:50:02 - opendevin:INFO: agent_controller.py:198 - write bash script to print 5
19:50:11 - opendevin:INFO: agent_controller.py:217 - AgentThinkAction(thought="To write bash script to print 5, I need to create a script file first, and then write the command into that file. First, I'll create an empty file using touch. Then, I'll write the echo command into the file using echo or write action.", action='think')
19:50:11 - opendevin:INFO: agent_controller.py:197 - STEP 1
19:50:11 - opendevin:INFO: agent_controller.py:198 - write bash script to print 5
19:50:18 - opendevin:INFO: agent_controller.py:217 - AgentThinkAction(thought="To create a bash script that prints the number 5, I need to write a shell command that does this, and then save it into a file. The command to print '5' is echo 5. I will write this command into a new file. Let's call the file 'print_five.sh'. I will use the 'write' action to create this file.", action='think')
19:50:18 - opendevin:INFO: agent_controller.py:197 - STEP 2
19:50:18 - opendevin:INFO: agent_controller.py:198 - write bash script to print 5
19:50:23 - opendevin:INFO: agent_controller.py:217 - FileWriteAction(path='print_five.sh', content='echo 5', start=0, end=-1, thoughts='', action='write')
19:50:23 - opendevin:INFO: agent_controller.py:233 -
19:50:23 - opendevin:INFO: agent_controller.py:197 - STEP 3
19:50:23 - opendevin:INFO: agent_controller.py:198 - write bash script to print 5
19:50:30 - opendevin:INFO: agent_controller.py:217 - AgentThinkAction(thought="Now that I've created the file 'print_five.sh', I need to make sure it contains the correct command to print the number 5. The command is echo 5. To verify this, I should use the cat command to read the contents of the file.", action='think')
19:50:30 - opendevin:INFO: agent_controller.py:197 - STEP 4
19:50:30 - opendevin:INFO: agent_controller.py:198 - write bash script to print 5
19:50:38 - opendevin:ERROR: agent_controller.py:103 - Error in loop
Traceback (most recent call last):
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 360, in repair_json
parsed_json = json.loads(json_str)
^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/agent/miniconda3/envs/opendev/lib/python3.11/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 7 column 1 (char 220)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 99, in _run
    finished = await self.step(i)
  File "/home/agent/OpenDevin/opendevin/controller/agent_controller.py", line 212, in step
    action = self.agent.step(self.state)
  File "/home/agent/OpenDevin/agenthub/monologue_agent/agent.py", line 229, in step
    action = prompts.parse_action_response(action_resp)
  File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/prompts.py", line 165, in parse_action_response
    action_dict = json.loads(response)
  File "/home/agent/OpenDevin/agenthub/monologue_agent/utils/json.py", line 36, in loads
    s = repair_json(s)
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 362, in repair_json
    parsed_json = parser.parse()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 40, in parse
    return self.parse_json()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
    return self.parse_object()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 126, in parse_object
    value = self.parse_json()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 80, in parse_json
    return self.parse_json()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 52, in parse_json
    return self.parse_object()
  File "/home/agent/miniconda3/envs/opendev/lib/python3.11/site-packages/json_repair/json_repair.py", line 130, in parse_object
    obj[key] = value
TypeError: unhashable type: 'dict'
You need to pass LLM_BASE_URL too.
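For example, reusing the LM Studio address that appears in the logs in this thread (adjust the host/port and model name to your setup), the full command would look roughly like this sketch:

```
docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e SANDBOX_TYPE=exec \
    -e LLM_BASE_URL="http://192.168.0.93:1234/v1" \
    -e LLM_MODEL="openai/lm-studio" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
```

Without LLM_BASE_URL, the openai/ prefix makes LiteLLM default to api.openai.com, which then rejects "lm-studio" as an API key with the 401 shown above.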
I'm so sorry that I even missed it. Now it starts working properly. Thanks again.
@zhonggegege your last attempt worked, as far as the APIConnectionError is concerned. It now connected successfully, and it started the task. It executed several steps. So please note that this way is how you make it work. (Yes, the web UI needs the model)
It encountered a different error later, one about JSON, that's not the same thing... The LLM quality matters, unfortunately the LLM you're using didn't seem to obey instructions and probably sent something it shouldn't have.
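That "Extra data" failure happens when the model appends free-form text after the JSON object it was asked to produce. A minimal stdlib reproduction (the payload below is hypothetical, just to illustrate the failure mode):

```python
import json

# A well-formed action object, followed by extra chatter from the model
response = '{"action": "think", "args": {"thought": "ok"}}\nSure! Anything else?'

try:
    json.loads(response)
except json.JSONDecodeError as e:
    print(e.msg)  # Extra data

# A tolerant parser can still recover the leading object with raw_decode,
# which returns the decoded value and the index where parsing stopped
obj, end = json.JSONDecoder().raw_decode(response)
print(obj["action"])  # think
```

This is essentially what the json_repair fallback in the traceback above is trying to do, although with a malformed enough response even repair can fail.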
I think there is also a bug on the OpenDevin side in the behavior you're seeing now; we will fix that. Please note, though, that some tasks might not complete as you wish with various LLMs anyway... Try again, or try other LLMs too; you can set them up in a similar way.
Thanks for your reply, I understand. However, in the successful attempt above I did not set the model in the WEB UI: in many previous attempts, the customized model path I filled in was sent to the terminal and enabled successfully, but the model setting was never displayed properly in the WEB UI afterwards. Currently, I am eager to connect to the LLM server. I will try other models many times and feed back any useful information. Thank you, lovely people. ^^
the model settings here were never displayed properly on the WEB UI
Ah, I know what you mean, you are absolutely right, I just noticed that too. But when I tried, it worked with the model I saved, even if it doesn't show it later. It saved the model, it didn't display it.
I'm sure we will fix that, it is unexpected.
Can you please tell, the successful attempt was this? LLM_MODEL="openai/MaziyarPanahi/WizardLM-2-7B-GGUF"
This is wrong. As @SmartManoj directed me to try, it works properly when used in the parameters of "docker run":

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_MODEL="openai/lm-studio" \
    -e SANDBOX_TYPE=exec \
    -e LLM_BASE_URL="http://192.168.0.93:1234/v1" \
    -e LLM_MODEL="openai/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
Thanks for the feedback! Were you running poetry run python opendevin/main.py -d ./workspace -t "write bash script to print 5", or using the web UI?
He did both.
Yes, I am using WEB UI now.
You can set LLM_MODEL="openai/anything". Here the openai/ prefix is the key, as LM Studio exposes only one model at a time.
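To see what LM Studio is actually serving, you can query its OpenAI-compatible models endpoint (address assumed from this thread; adjust to your setup). Because only the loaded model is served, several different LLM_MODEL values resolve to the same thing as long as they carry the openai/ prefix:

```
# List the single model LM Studio currently exposes
curl http://192.168.0.93:1234/v1/models

# Any of these work -- the prefix routes LiteLLM to the OpenAI-compatible
# endpoint at LLM_BASE_URL; LM Studio serves whichever model is loaded
export LLM_MODEL="openai/anything"
export LLM_MODEL="openai/lmstudio-community/Meta-Llama-3-8B-Instruct-GGUF"
export LLM_BASE_URL="http://192.168.0.93:1234/v1"
```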
@rbren still added the label to a solved issue?
Ah if this is solved I'll close :)
How do I access models that are not available in LiteLLM? This is my custom model.
How are you running this model? Please check out this for OpenAI-compatible models.
Is there an existing issue for the same bug?
Describe the bug
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_BASE_URL="http://192.168.0.93:1234/v1" \
    -e SANDBOX_TYPE=exec \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO:     172.17.0.1:39496 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO:     ('172.17.0.1', 39502) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI2YzNhZmY0OC1mZDIwLTRmNjAtYmZhOS0yYmY3OTk3NDJlNDQifQ.dr-5Izu4B2Ziz0plH-KU7DCSNHL2sue7FU-x77iOEJk" [accepted]
INFO:     connection open
Starting loop_recv for sid: 6c3aff48-fd20-4f60-bfa9-2bf799742e44
INFO:     172.17.0.1:39496 - "GET /locales/zh/translation.json HTTP/1.1" 404 Not Found
INFO:     172.17.0.1:39496 - "GET /api/refresh-files HTTP/1.1" 200 OK
07:55:26 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
07:55:26 - opendevin:INFO: llm.py:51 - Initializing LLM with model: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
07:55:27 - opendevin:INFO: exec_box.py:221 - Container stopped
07:55:27 - opendevin:INFO: exec_box.py:239 - Container started
INFO:     172.17.0.1:39496 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO:     172.17.0.1:39500 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO:     172.17.0.1:39496 - "GET /api/agents HTTP/1.1" 200 OK
07:55:32 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM MaziyarPanahi/WizardLM-2-7B-GGUF
07:55:32 - opendevin:INFO: llm.py:51 - Initializing LLM with model: MaziyarPanahi/WizardLM-2-7B-GGUF
07:55:43 - opendevin:INFO: exec_box.py:221 - Container stopped
07:55:43 - opendevin:INFO: exec_box.py:239 - Container started
============== STEP 0
07:55:53 - PLAN
Use python to write a snake game
07:55:54 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers. Attempt #1 | You can customize these settings in the configuration.
07:55:55 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers. Attempt #2 | You can customize these settings in the configuration.
07:55:56 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers. Attempt #3 | You can customize these settings in the configuration.
07:55:58 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers. Attempt #4 | You can customize these settings in the configuration.
07:56:05 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers. Attempt #5 | You can customize these settings in the configuration.
07:56:05 - opendevin:ERROR: agent_controller.py:102 - Error in loop
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 662, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5944, in get_llm_provider
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5931, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/opendevin/controller/agent_controller.py", line 98, in _run
    finished = await self.step(i)
  File "/app/opendevin/controller/agent_controller.py", line 211, in step
    action = self.agent.step(self.state)
  File "/app/agenthub/monologue_agent/agent.py", line 218, in step
    resp = self.llm.completion(messages=messages)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/app/opendevin/llm/llm.py", line 78, in wrapper
    resp = completion_unwrapped(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2977, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2875, in wrapper
    result = original_function(*args, **kwargs)
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2137, in completion
    raise exception_type(
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8665, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8633, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

Current Version
Installation and Configuration
Model and Agent
lm-studio:MaziyarPanahi/WizardLM-2-7B-GGUF
Reproduction Steps
export LLM_API_KEY="lm-studio"
export WORKSPACE_BASE=/home/agent/OpenDevin/workspace

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_BASE_URL="http://192.168.0.93:1234/v1" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
In WEB UI:
1. Set up the model: lm-studio:MaziyarPanahi/WizardLM-2-7B-GGUF (or MaziyarPanahi/WizardLM-2-7B-GGUF/WizardLM-2-7B.Q6_K.gguf)
2. "Use python to write a snake game"
Logs, Errors, Screenshots, and Additional Context
After upgrading to 0.4.0, "Error creating controller. Please check Docker is running using docker ps" appears, and reinstallation has no effect. Referring to https://github.com/OpenDevin/OpenDevin/issues/1156#issuecomment-2064549427, I used "-e SANDBOX_TYPE=exec", but the problem still exists after starting and running. It is worth noting that 0.3.1 started normally in the same way, and there was no such problem there.
Windows 10+WSL+Ubuntu-20.04+Docker(win)