All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More
https://all-hands.dev
MIT License

Document LMStudio usage #419

Closed Niche-Apps closed 4 months ago

Niche-Apps commented 7 months ago

Describe the bug: Trouble connecting to LMStudio

Steps to Reproduce:
1. Start the server in LMStudio
2. Start the frontend and backend of OpenDevin
3.

Expected behavior: OpenDevin asks what I want it to build

Actual behavior

Additional context

OpenDevin does nothing and I get this error in LMStudio:

[2024-03-30 19:21:30.994] [ERROR] Unexpected endpoint or method. (GET /litellm-models). Returning 200 anyway

enyst commented 7 months ago

/litellm-models is not how you call the models; you need to set the URL to the one LM Studio exposes. Check its documentation for how to call the model you want to use.

I'm not familiar with LM Studio, FWIW some of its setup has been discussed in this issue. The last few comments seem to point to a solution.
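For reference, here is a minimal sketch (not OpenDevin code) of calling LM Studio's OpenAI-compatible server directly; if a request like this fails, OpenDevin will not be able to reach it either. It assumes LM Studio's local server is running on its default port 1234 and that the openai Python package (v1+) is installed:

```python
# Smoke test against LM Studio's OpenAI-compatible endpoint (illustrative only).
# Assumes the server is at http://localhost:1234/v1; the API key is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    # LM Studio generally answers with whichever model is currently loaded,
    # so this name is mostly informational.
    model="local-model",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```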

hchris1 commented 7 months ago

@Niche-Apps This is only somewhat related, but FYI: I am running Mistral 7B locally using jan.ai, and this is what I use in my config.toml. Notice the openai/ prefix; it is required per the LiteLLM docs for OpenAI-compatible endpoints.

LLM_BASE_URL="http://localhost:1337/v1"
LLM_API_KEY="EMPTY"
LLM_MODEL="openai/mistral-ins-7b-q4"
rbren commented 7 months ago

@Niche-Apps something is misconfigured. The frontend is reaching out to your LMStudio server on port 3000, and not reaching the backend (which is expected to be running on 3000).

Did the backend start successfully on 3000?

ajeema commented 7 months ago

I'd like to understand how to use LMStudio as well.

mikeaper323 commented 7 months ago

Try the following settings for LM Studio:

LLM_API_KEY="lm-studio"
LLM_MODEL="openai/mistral" # leave openai/ as is; you can change mistral to the local model you use
LLM_BASE_URL="http://localhost:1234/v1"
LLM_EMBEDDING_MODEL="local"

stratte89 commented 7 months ago

I've got the same problem with both Ollama and LM Studio.

For LM Studio I tried:

LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/TheBloke/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

For Ollama:

LLM_BASE_URL="localhost:11434"
LLM_MODEL="openai/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

logs.txt

does anyone know a solution please?

mikeaper323 commented 7 months ago

Try these settings for LM Studio:

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

stratte89 commented 7 months ago

Hey, many thanks for your quick response. I just tried it and got this error. Does it matter what model I choose in Devin? Because there is no dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf there.

Oops. Something went wrong:

python3.11/site-packages/openai/_base_client.py", line 960, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

[2024-03-31 23:53:27.433] [INFO] [LM STUDIO SERVER] Stopping server..
[2024-03-31 23:53:27.445] [INFO] [LM STUDIO SERVER] Server stopped
[2024-03-31 23:53:29.041] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-03-31 23:53:29.041] [INFO] [LM STUDIO SERVER] Heads up: you've enabled CORS. Make sure you understand the implications
[2024-03-31 23:53:29.072] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-03-31 23:53:29.072] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-03-31 23:53:29.073] [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
[2024-03-31 23:53:29.074] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-03-31 23:53:29.075] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-03-31 23:53:29.075] [INFO] [LM STUDIO SERVER] Model loaded: TheBloke/dolphin-2.5-mixtral-8x7b-GGUF/dolphin-2.5-mixtral-8x7b.Q2_K.gguf
[2024-03-31 23:53:29.076] [INFO] [LM STUDIO SERVER] Logs are saved into C:\tmp\lmstudio-server-log.txt

mikeaper323 commented 7 months ago

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://localhost:1234/v1" LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace

stratte89 commented 7 months ago

unfortunately still the same error :c

jay-c88 commented 7 months ago

@stratte89 Are you using wsl in Windows, and running LMStudio in Windows? If yes: https://github.com/OpenDevin/OpenDevin/issues/435#issuecomment-2028907533

stratte89 commented 7 months ago

@stratte89 Are you using wsl in Windows, and running LMStudio in Windows? If yes: #435 (comment)

Awesome! Many thanks, I will try it 👍

mikeaper323 commented 7 months ago

Yes. WSL on Windows, LM Studio on Windows, Conda PowerShell env. Follow all the project instructions.

stratte89 commented 7 months ago

Yes. WSL on Windows, LM Studio on Windows, Conda PowerShell env. Follow all the project instructions.

Oh, I set Devin up in WSL Ubuntu on Windows with no Conda PowerShell env. I guess I have to reinstall everything then?

jay-c88 commented 7 months ago

You don't 'need' a conda environment (except that you are just littering your wsl environment and maybe cause dependency issues for other projects ^^). If you installed it in your base wsl environment, it should still run fine.

stratte89 commented 7 months ago

You don't 'need' a conda environment (except that you are just littering your wsl environment and maybe cause dependency issues for other projects ^^). If you installed it in your base wsl environment, it should still run fine.

I created a new file like this and restarted the computer, then I reinstalled Devin in a conda env. "Open the WSL config file C:\Users\%username%\.wslconfig (create one if it doesn't exist), and add this:

[wsl2]
networkingMode=mirrored"

I use:

LLM_API_KEY="lmstudio"
LLM_BASE_URL="http://localhost:1234/v1"
LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

but i still get this APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

The only thing I didn't do was "Then restart wsl completely (exit docker and run wsl --shutdown), then restart everything." because I didn't know how; that's why I restarted the PC and reinstalled Devin.

mikeaper323 commented 7 months ago

Make sure the API key is LLM_API_KEY="lm-studio", not LLM_API_KEY="lmstudio". And make sure you haven't changed the port in LM Studio to another port. The only other thing I can think of is to try another model.

mikeaper323 commented 7 months ago

Oh and maybe run prompt with administrator privileges, but I don't think that would matter

jay-c88 commented 7 months ago

but i still get this APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

Seems that your opendevin is still not able to find/connect to your LMStudio server.

I created a new file like this and restarted the computer, then I reinstalled Devin in a conda env. "Open the WSL config file C:\Users\%username%\.wslconfig (create one if it doesn't exist), and add this:

Make sure the .wslconfig file is actually in your Windows user profile folder; it could be a different location for you. Type %UserProfile% in the Explorer address bar to confirm the file is inside.

stratte89 commented 7 months ago

Oh and maybe run prompt with administrator privileges, but I don't think that would matter

I changed the config.toml and used admin rights already, but still nothing. I tried both of these:

uvicorn opendevin.server.listen:app --port 3000
npm start

or

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host

Both give the same result. Do I need to change a different .wslconfig for the Ubuntu terminal?

image

Btw, GPT (OpenAI API key) is working fine.

jay-c88 commented 7 months ago

Yeah there is absolutely nothing arriving at LMStudio if you look at lmstudio-server-log.txt.

both the same thing. do i need to change a different wslconfig for the ubuntu terminal?

The network mirror configuration has to be the global config file %UserProfile%\.wslconfig for WSL to be able to access the host's localhost.
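A quick way to confirm the mirrored networking actually took effect is to probe the Windows host's LM Studio port from inside WSL. A minimal sketch, assuming LM Studio is listening on its default port 1234 (standard library only):

```python
# Run this inside WSL: with networkingMode=mirrored, the Windows host's
# LM Studio server should answer on localhost:1234.
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models", timeout=5) as resp:
    print(resp.status, resp.read()[:200])
```

If this raises a connection error, the .wslconfig change has not taken effect yet (or LM Studio is on a different port).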

stratte89 commented 7 months ago

Yeah there is absolutely nothing arriving at LMStudio if you look at lmstudio-server-log.txt.

both the same thing. do i need to change a different wslconfig for the ubuntu terminal?

The network mirror configuration has to be the global config file %UserProfile%\.wslconfig for WSL to be able to access the host's localhost.

Well, I did that. I created a new .wslconfig file and pasted the code inside.

image

Update: I installed Ubuntu on VirtualBox and installed Devin, and I set it up with a network bridge. Now I get a connection, but still an error.

(base) stratte@stratte-VirtualBox:~/Desktop$ curl http://192.168.178.20:1234
{"error":"Unexpected endpoint or method. (GET /)"}
(base) stratte@stratte-VirtualBox:~/Desktop$ telnet 192.168.178.20 1234
Trying 192.168.178.20...
Connected to 192.168.178.20.
Escape character is '^]'.
HTTP/1.1 408 Request Timeout
Connection: close
Connection closed by foreign host.

Oops. Something went wrong: OpenAIException - Error code: 400 - {'error': ' Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

File "/home/stratte/.local/share/virtualenvs/OpenDevin-main-2ejNtS9k/lib/python3.11/site-packages/litellm/llms/openai.py", line 382, in completion

raise OpenAIError(status_code=e.status_code, message=str(e))

litellm.llms.openai.OpenAIError: Error code: 400 - {'error': ' Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

LLM_API_KEY="lm-studio" LLM_BASE_URL="http://192.168.178.20:1234/v1" LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

When I use

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://192.168.178.20:1234" # without /v1
LLM_MODEL="openai/dolphin-2.5-mixtral-8x7b-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

then I get this error:

raise Exception(f"Invalid response object {traceback.format_exc()}")

Exception: Invalid response object Traceback (most recent call last):

  File "/home/stratte/.local/share/virtualenvs/OpenDevin-main-2ejNtS9k/lib/python3.11/site-packages/litellm/utils.py", line 6585, in convert_to_model_response_object

    for idx, choice in enumerate(response_object["choices"]):

                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

TypeError: 'NoneType' object is not iterable

LM Studio showed this error

Unexpected endpoint or method. (POST /chat/completions). Returning 200 anyway

mikeaper323 commented 7 months ago

I'm sorry you're still having issues getting LM Studio to connect. I'll try to be as specific as possible about what worked for me. This is what I did; you don't have to do exactly what I did, I'm guessing there are other ways around it:

This is for windows:

step 1:

Download the latest version of Anaconda and install it with default settings. (I added conda to my Windows environment path and chose to use it for my default Python; however, you shouldn't do this if conda won't be your go-to prompt or PowerShell.)

step 2:

Download the latest version of Docker and install it with default settings. I have Docker autostart when Windows starts up; you don't have to do this, but if you don't, make sure you manually start Docker.

step 3:

Download and install the latest version of Node.js.

step 4:

Open Windows PowerShell as admin and run the command: wsl --install, then restart the computer. Open Windows PowerShell as admin again and run: wsl -l -v to make sure WSL 2 is being used. If WSL 2 is not being used, run this command in the same PowerShell: wsl --set-default-version 2, restart the computer, and try the command again: wsl -l -v. If for some reason at this point you are unable to run WSL, download it from the Microsoft Store, like Ubuntu (that's the one I use). At this point you should have a Linux with WSL on Windows.

step 5:

Open the Anaconda PowerShell as admin (you can find this with the search icon in Windows) and run the command: conda create -n devin python=3.11 (this creates a conda environment called devin, with Python 3.11). Run: conda activate devin (this activates the devin environment). Next, cd into the directory where you want to install Devin and run: git clone https://github.com/OpenDevin/OpenDevin.git (I'm sure you already have this, but make sure it's the latest version). Next run: docker pull ghcr.io/opendevin/sandbox (this pulls the OpenDevin sandbox image for Docker). cd into the opendevin folder, and from this point follow the OpenDevin instructions:

Then copy config.toml.template to config.toml. Add an OpenAI API key to config.toml, or see below for how to use different models.

LLM_API_KEY="sk-..." Next, start the backend:

python -m pip install pipenv
python -m pipenv install -v
python -m pipenv shell

step 6: deploy your desired model from LM Studio.

step 7: edit config.toml as follows:

LLM_API_KEY="insert claude api key here if you like to use claude and uncomment"

LLM_MODEL="claude-3-haiku-20240307"

LLM_API_KEY="insert openAI key here if you like to use openAI and uncomment"

LLM_MODEL="gpt-3.5-turbo"

This section is for LM Studio only; it's already uncommented:

LLM_API_KEY="lm-studio" LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF" #you can change this to any model you like, just keep the openai/ LLM_BASE_URL="http://localhost:1234/v1" LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local" WORKSPACE_DIR="./workspace"

step 8: go back to the conda PowerShell with both environments active and run: uvicorn opendevin.server.listen:app --port 3000

step 9: continue the OpenDevin instructions; there is no need to have the environments active in this section, since it uses Node.js. In a second terminal, start the frontend:

cd frontend
npm install
npm start


By following those steps, it should work. If at this point it doesn't, I wouldn't know how to help you; maybe fresh-install everything. Keep in mind that OpenDevin is still a new project, so most of these local models don't work well. GOOD LUCK

stratte89 commented 7 months ago

Thank you for all the details. I will reinstall it using the Anaconda terminal; I was using the Ubuntu terminal, and I don't know if that makes a big difference though. I did all the steps you said. And since it's working with an OpenAI API key, I doubt it's the installation; it's more about the connection between LM Studio and Devin.

I installed Ubuntu in a virtual machine using a network bridge and my local IP (local-ip:1234/v1), and in the terminal I get a connection between the VM and the LM Studio API. I use:

uvicorn opendevin.server.listen:app --port 3000 --host 0.0.0.0
npm run start -- --host

(base) stratte@stratte-VirtualBox:~/Desktop$ curl -v http://192.168.178.20:1234

EDIT: I fixed it, well, we did! For everybody who faces a similar problem, try this config; it works for me now. In my case I am using Windows 10 + Oracle VirtualBox Ubuntu: Devin runs in Ubuntu and LM-Studio on Windows.

LLM_API_KEY="na"
LLM_BASE_URL="actual local ip of your host pc:1234/v1" # check ipconfig in a cmd
LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

EDIT 2: I managed to make it work on Windows as well now by using this:

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://192.168.178.20:1234/v1" # local ip
LLM_MODEL="openai/deepseek-coder-6.7B-instruct-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

EDIT: Never mind... I mean, I am connected to LM-Studio somehow, but now I get this error:

litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'error': ' Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

    OBSERVATION:
    OpenAIException - Error code: 400 - {'error': '<LM Studio error> Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a'}

LM-Studio:

[2024-04-01 11:31:18.999] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: ' You're a thoughtful robot. Your main task is to testapp. Don't expand the scope of your task--just ... (truncated in these logs)' } (total messages = 1)
[2024-04-01 11:31:19.019] [ERROR] Unknown exception during inferencing.. Error Data: n/a, Additional Data: n/a
[2024-04-01 11:31:19.110] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-04-01 11:31:19.111] [INFO] Received POST request to /v1/chat/completions with body: { "messages": [

...and only with JSON.\n\n\n", "role": "user" } ], "model": "Deepseek-Coder-6.7B-Instruct-GGUF" }
[2024-04-01 11:31:19.118] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: ' You're a thoughtful robot. Your main task is to testapp. Don't expand the scope of your task--just ... (truncated in these logs)' } (total messages = 1)

LLM_API_KEY="lm-studio"
LLM_BASE_URL="http://192.168.178.20:1234/v1"
LLM_MODEL="openai/Deepseek-Coder-6.7B-Instruct-GGUF"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

Niche-Apps commented 7 months ago

I just discovered my filter stopped all the replies. I do seem to have a problem with the backend. I’m on Mac and got this:

opendevin % uvicorn opendevin.server.listen:app --port 3000
Traceback (most recent call last):
  File "/Users/josephsee/anaconda3/bin/uvicorn", line 8, in
    sys.exit(main())
    ^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
    ^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/main.py", line 418, in main
    run(
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/server.py", line 62, in run
    return asyncio.run(self.serve(sockets=sockets))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
    ^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/server.py", line 69, in serve
    config.load()
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/config.py", line 458, in load
    self.loaded_app = import_from_string(self.app)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/josephsee/anaconda3/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/josephsee/anaconda3/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "", line 1204, in _gcd_import
  File "", line 1176, in _find_and_load
  File "", line 1147, in _find_and_load_unlocked
  File "", line 690, in _load_unlocked
  File "", line 940, in exec_module
  File "", line 241, in _call_with_frames_removed
  File "/Users/josephsee/OpenDevin/opendevin/server/listen.py", line 1, in
    from opendevin.server.session import Session
  File "/Users/josephsee/OpenDevin/opendevin/server/session.py", line 14, in
    from opendevin.controller import AgentController
  File "/Users/josephsee/OpenDevin/opendevin/controller/__init__.py", line 1, in
    from .agent_controller import AgentController
  File "/Users/josephsee/OpenDevin/opendevin/controller/agent_controller.py", line 22, in
    from .command_manager import CommandManager
  File "/Users/josephsee/OpenDevin/opendevin/controller/command_manager.py", line 4, in
    from opendevin.sandbox.sandbox import DockerInteractive
  File "/Users/josephsee/OpenDevin/opendevin/sandbox/sandbox.py", line 10, in
    import docker
ModuleNotFoundError: No module named 'docker'


rbren commented 7 months ago

@Niche-Apps looks like you need to redo the pipenv setup--that should install docker
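A one-liner to confirm the dependency is present in the environment the backend runs in (a sketch; run it inside the pipenv shell):

```python
# If this import fails, the pipenv environment still lacks the docker package
# from the traceback above (ModuleNotFoundError: No module named 'docker').
import docker
print(docker.__version__)
```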

Niche-Apps commented 7 months ago

Now I get this in the backend. Is there a setting I need to change in the config file?

Retrying llama_index.embeddings.openai.base.get_embeddings in 0.8089781787046612 seconds as it raised APIConnectionError: Connection error..


Niche-Apps commented 7 months ago

Here is what I have, but I thought the critical part was the LLM_BASE_URL:

LLM_BASE_URL="https://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

custom_llm_provider="openai"

WORKSPACE_DIR="./workspace"

LLM_MODEL="openai-GPT-4"

LLM_API_KEY="your-api-key"


Niche-Apps commented 7 months ago

Ok, it doesn't like the LLM provider setting. I got about 20 of these.

Oops. Something went wrong: Error condensing thoughts: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openai-GPT-4 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers


Niche-Apps commented 7 months ago

I changed my settings to this and still got the same message.

LLM_BASE_URL="https://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

custom_llm_provider="completion(model='bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf',)"

WORKSPACE_DIR="./workspace"

LLM_MODEL="bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"

LLM_API_KEY="your-api-key"


enyst commented 7 months ago

I changed my settings to this and still got the same message.

LLM_BASE_URL="https://localhost:1234/v1"
LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"
custom_llm_provider="completion(model='bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf',)"
WORKSPACE_DIR="./workspace"
LLM_MODEL="bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"
LLM_API_KEY="your-api-key"

@Niche-Apps according to comments above, it seems that for LM Studio it may work using LiteLLM openai compatible instructions here: https://docs.litellm.ai/docs/providers/openai_compatible

That would mean:

LLM_MODEL="openai/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"
LLM_API_KEY="lm-studio"

Can you try to replace those two? I don't know about custom_llm_provider, where is that setting from? It's not from or for OpenDevin.

Niche-Apps commented 7 months ago

So now I have this.

LLM_BASE_URL="https://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

WORKSPACE_DIR="./workspace"

LLM_MODEL="openai/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"

LLM_API_KEY="lm-studio"

And I get this.

Oops. Something went wrong: Error condensing thoughts: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=openai-GPT-4 Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

Should I have left the embedding model as openai?


Niche-Apps commented 7 months ago

Ok, apparently it has to do with the option in the OpenDevin workspace. What do I choose? There are so many options, but none of them quite match.


enyst commented 7 months ago

Should I have left the embedding model as openai?

I don't think so, but I'm not sure. According to posts from folks who got it working with LM Studio and local model, no, it was local.

Wait, this looks wrong: LLM_MODEL="openai/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf". The second part, after /, is: "Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf" - are you sure that is the name of the model you're trying to use?
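One way to check the exact identifier is to ask the server itself: LM Studio's GET /v1/models (visible in the logs above) returns the model IDs it serves, and whatever it prints is a good candidate for what goes after the openai/ prefix in LLM_MODEL. A minimal sketch, assuming the default http://localhost:1234/v1 endpoint and the openai package:

```python
# List the model identifiers LM Studio actually serves (GET /v1/models).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
for model in client.models.list().data:
    print(model.id)
```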

stratte89 commented 7 months ago

I installed Ubuntu on a second drive now and spent like a day installing everything. Now Devin is finally installed and I was hoping the problem would be fixed, but it's still there :/

(OpenDevin-main) (base) stratte@stratte-MS-7D08:~/Schreibtisch/AI/OpenDevin-main$ curl http://localhost:1234/v1
{"error":"Unexpected endpoint or method. (GET /v1)"}

LLM_API_KEY="lm-studio" LLM_BASE_URL="localhost:1234/v1" LLM_MODEL="openai/stable-code-instruct-3b-GGUF" LLM_EMBEDDING_MODEL="local" WORKSPACE_DIR="./workspace"

[2024-04-02 01:16:16.373] [INFO] [LM STUDIO SERVER] Verbose server logs are ENABLED
[2024-04-02 01:16:16.375] [INFO] [LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2024-04-02 01:16:16.375] [INFO] [LM STUDIO SERVER] Supported endpoints:
[2024-04-02 01:16:16.375] [INFO] [LM STUDIO SERVER] -> GET http://localhost:1234/v1/models
[2024-04-02 01:16:16.375] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/chat/completions
[2024-04-02 01:16:16.375] [INFO] [LM STUDIO SERVER] -> POST http://localhost:1234/v1/completions
[2024-04-02 01:16:16.376] [INFO] [LM STUDIO SERVER] Model loaded: bartowski/stable-code-instruct-3b-GGUF/stable-code-instruct-3b-Q8_0.gguf
[2024-04-02 01:16:16.376] [INFO] [LM STUDIO SERVER] Logs are saved into /tmp/lmstudio-server-log.txt

File "/home/stratte/.local/share/virtualenvs/OpenDevin-main--c0JPQEb/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request raise APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.

What the heck am I doing wrong here :/

This is the output with Ollama

Same here, I'm trying to get it to work with LM Studio or Ollama as well; neither is working. I'm on Ubuntu.

For Ollama:

LLM_BASE_URL="127.0.0.1:11434"
LLM_MODEL="ollama/mistral"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_DIR="./workspace"

I run ollama serve in one terminal and ollama run mistral in another, plus the backend with uvicorn opendevin.server.listen:app --port 3000 and npm start. The OpenAI API works fine.

Oops. Something went wrong: Error condensing thoughts: No healthy deployment available, passed model=ollama/mistral

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/stratte/Schreibtisch/AI/OpenDevin-main/opendevin/controller/agent_controller.py", line 89, in step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/stratte/Schreibtisch/AI/OpenDevin-main/agenthub/monologue_agent/agent.py", line 151, in step
    self._add_event(prev_action.to_dict())
  File "/home/stratte/Schreibtisch/AI/OpenDevin-main/agenthub/monologue_agent/agent.py", line 94, in _add_event
    self.monologue.condense(self.llm)
  File "/home/stratte/Schreibtisch/AI/OpenDevin-main/agenthub/monologue_agent/utils/monologue.py", line 36, in condense
    raise RuntimeError(f"Error condensing thoughts: {e}")
RuntimeError: Error condensing thoughts: No healthy deployment available, passed model=ollama/mistral

OBSERVATION: Error condensing thoughts: No healthy deployment available, passed model=ollama/mistral

Exited before finishing
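One way to narrow this down (a sketch, not an OpenDevin feature): call the same Ollama model through LiteLLM directly, outside OpenDevin, to see whether the "No healthy deployment available" error comes from the model/endpoint or from the OpenDevin config. It assumes ollama serve is running on the default port 11434 and the mistral model has been pulled:

```python
# Direct LiteLLM call to a local Ollama server, bypassing OpenDevin entirely.
import litellm

response = litellm.completion(
    model="ollama/mistral",
    api_base="http://localhost:11434",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```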

Niche-Apps commented 7 months ago

Ok I get something different now, but it just keeps repeating.

File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 897, in request return self._request( ^^^^^^^^^^^^^^ File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 950, in _request return self._retry_request( ^^^^^^^^^^^^^^^^^^^^ File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 1021, in _retry_request return self._request( ^^^^^^^^^^^^^^ File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 950, in _request return self._retry_request( ^^^^^^^^^^^^^^^^^^^^ File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 1021, in _retry_request return self._request( ^^^^^^^^^^^^^^ File ".local/share/virtualenvs/OpenDevin-N_ODd4da/lib/python3.11/site-packages/openai/_base_client.py", line 960, in _request raise APIConnectionError(request=request) from err openai.APIConnectionError: Connection error.


enyst commented 7 months ago

@Niche-Apps what did you change? What is the config you're using now?

Niche-Apps commented 7 months ago

I've got this, and I restarted the backend. I don't know if the provider setting does anything, but I found it in the LiteLLM docs under "custom API server".

LLM_BASE_URL="http://localhost:1234/v1"

LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

WORKSPACE_DIR="./workspace"

LLM_MODEL="openai/bartowski/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf"

LLM_API_KEY="lm-studio"

CUSTOM_LLM_PROVIDER="openai"


enyst commented 7 months ago

@Niche-Apps what is the exact name of that model you're trying to run? can you please give a link to it on hugging face or where it is? Or a screenshot with its name as you see it in LM Studio itself, not in opendevin?

Niche-Apps commented 7 months ago

On Apr 1, 2024, at 2:54 PM, Engel Nyst @.***> wrote:

LLM_MODEL="openai/Starling-LM-7B-beta-GGUF/Starling-LM-7B-beta-Q6_K.gguf" LLM_API_KEY="lm-studio"

enyst commented 7 months ago

@Niche-Apps That's not what I'm asking. I don't think the double "Starling...etc" of the name looks right. I'm asking where did you find that model in the first place, a link to it or a screenshot of it.

stratte89 commented 7 months ago

@Niche-Apps That's not what I'm asking. I don't think the double "Starling...etc" of the name looks right. I'm asking where did you find that model in the first place, a link to it or a screenshot of it.

I would say that it is this one: https://huggingface.co/LoneStriker/Starling-LM-7B-beta-GGUF

enyst commented 7 months ago

Thank you @stratte89. Once we figure out how we can do this, we can document them for everyone.

@Niche-Apps can you try LLM_MODEL="openai/Starling-LM-7B-beta-GGUF" with your last settings?

Niche-Apps commented 7 months ago

Ok it’s doing something. I’ll see how far it gets.


Niche-Apps commented 7 months ago

STEP 0

PLAN:

🔵 0 write the game snake in python

huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either:


Niche-Apps commented 7 months ago

Ok that was a problem with orjson on Mac which I reinstalled.


Niche-Apps commented 7 months ago

Now it produces a lot of thoughts in the server log but nothing happens in the OpenDevin workspace.


Niche-Apps commented 7 months ago

I keep getting this. I let it run for over 70 steps and nothing happened in the workspace.

Error sending data to client Cannot call "send" once a close message has been sent.

Or this.

OBSERVATION: Recalling memories... Error sending data to client Cannot call "send" once a close message has been sent.


enyst commented 7 months ago

I keep getting this. I let it run for over 70 steps and nothing happened in the workspace. Error sending data to client Cannot call "send" once a close message has been sent. Or this.

OBSERVATION: Recalling memories... Error sending data to client Cannot call "send" once a close message has been sent.

About this error: it sounds like the UI lost connection to the backend. As of now, that means you need to restart it.

Niche-Apps commented 7 months ago

How soon into a project should I see activity in the terminal or code editor?
