Closed by JayLZhou 6 months ago
Are you on a Mac?
Yes, M2.
So am I, and there's a problem that is being fixed just now, we need this branch: https://github.com/OpenDevin/OpenDevin/pull/891
Can you restart docker, then pull that branch to run it?
Sorry, it is still stuck at this step; I don't know why.
What is the last commit shown in `git log`?
commit 55760ec4ddc669daf4a0b8b36028d2e73c9ab17a
Author: Xingyao Wang <xingyao6@illinois.edu>
Date:   Mon Apr 8 12:59:18 2024 +0800
feat(sandbox): Support sshd-based stateful docker session (#847)
* support sshd-based stateful docker session
* use .getLogger to avoid same logging message to get printed twice
* update poetry lock for dependency
* fix ruff
* bump docker image version with sshd
* set-up random user password and only allow localhost connection for sandbox
* fix poetry
* move apt install up
commit 6e3b554317de7bc5d96ef81b4097287e05c0c4d0
Author: RaGe <foragerr@users.noreply.github.com>
Date:   Sun Apr 7 15:57:31 2024 -0400
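For reference, the latest commit can also be grabbed programmatically; a small sketch, assuming `git` is on PATH and the script is run inside the checkout:

```python
import subprocess

def last_commit(repo_dir="."):
    """Return 'hash subject' of the most recent commit in repo_dir."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%H %s"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    try:
        print(last_commit())
    except subprocess.CalledProcessError:
        print("not inside a git repository")
```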
Please do `git pull` again, and restart docker. It will pick up a hotfix, worth trying. But the full fix is on the branch I linked, and you need to pull it specifically.
No, it is still not working. Actually, I did `git pull` from your linked branch, but it still gets stuck at this step.
Maybe ssh is not running on your machine? Also, if you try to start the two parts separately, with `make start-backend` and `make start-frontend`, we will see what the backend doesn't like. Alternatively, there should be a log file in ./logs.
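To grab the newest file in ./logs without hunting manually, a small helper sketch (the ./logs path follows the comment above; adjust it if your checkout differs):

```python
from pathlib import Path

def newest_log(log_dir="./logs"):
    """Return the most recently modified file in log_dir, or None."""
    d = Path(log_dir)
    if not d.is_dir():
        return None
    files = [p for p in d.iterdir() if p.is_file()]
    return max(files, key=lambda p: p.stat().st_mtime) if files else None

if __name__ == "__main__":
    latest = newest_log()
    if latest is None:
        print("no log files found")
    else:
        print(f"--- {latest} ---")
        # print the last 20 lines, like `tail -n 20`
        print("\n".join(latest.read_text().splitlines()[-20:]))
```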
> opendevin-frontend@0.1.0 start
> vite --port 3001
VITE v5.2.8 ready in 357 ms
➜ Local: http://localhost:3001/
➜ Network: use --host to expose
➜ press h + enter to show help
INFO: ('127.0.0.1', 63890) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJlMzk5MWFkZS0xZWRlLTRlZDctYjFlYS04MjNjMWJkMWQzYjQifQ.XYFIdAi8Vhbw7n0iEEWhzZdue9WIJ4TqsKY68s5DFoc" [accepted]
Starting loop_recv for sid: e3991ade-1ede-4ed7-b1ea-823c1bd1d3b4, False
INFO: connection open
INFO: ('127.0.0.1', 63902) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJiNTU4MDg3Zi00NGFhLTQwOWItOWUyYy04YmI0MmE5NDEwMmMifQ.D4JH7BYER9ttOiKlsswN61kf1wYHz_aHt_WYQgunQ1Y" [accepted]
Starting loop_recv for sid: b558087f-44aa-409b-9e2c-8bb42a94102c, False
INFO: connection open
INFO: ('127.0.0.1', 63904) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJiNTU4MDg3Zi00NGFhLTQwOWItOWUyYy04YmI0MmE5NDEwMmMifQ.D4JH7BYER9ttOiKlsswN61kf1wYHz_aHt_WYQgunQ1Y" [accepted]
Starting loop_recv for sid: b558087f-44aa-409b-9e2c-8bb42a94102c, False
INFO: connection open
INFO: 127.0.0.1:63910 - "GET /messages/total HTTP/1.1" 200 OK
INFO: 127.0.0.1:63906 - "GET /refresh-files HTTP/1.1" 200 OK
INFO: 127.0.0.1:63909 - "GET /configurations HTTP/1.1" 200 OK
INFO: 127.0.0.1:63913 - "GET /refresh-files HTTP/1.1" 200 OK
21:23:39 - opendevin:INFO: sandbox.py:119 - Using workspace directory: /Users/zhouxiaolun/Projects/OpenDevin/workspace
21:23:39 - opendevin:INFO: sandbox.py:320 - Container stopped
Darwin
21:23:39 - opendevin:WARNING: sandbox.py:336 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
21:23:39 - opendevin:INFO: sandbox.py:356 - Container started
21:23:40 - opendevin:INFO: sandbox.py:372 - waiting for container to start: 1, container status: running
21:23:40 - opendevin:INFO: sandbox.py:198 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 2222 opendevin@localhost
with the password '5c04f12b-b7a3-4c9b-9c19-d5ff35b12f8b' and report the issue on GitHub.
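A quick way to check whether the sandbox's sshd is actually listening is a plain TCP connect; a minimal sketch, using port 2222 from the log above:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # the sandbox sshd is forwarded to localhost:2222 per the log above
    print("sandbox sshd reachable:", port_open("localhost", 2222))
```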
FYI, the llm folder in logs is empty (both response and prompt), and the opendevin_xxx.log is as shown:
21:25:42 - opendevin:INFO: sandbox.py:119 - Using workspace directory: /Users/zhouxiaolun/Projects/OpenDevin/workspace
21:25:42 - opendevin:INFO: sandbox.py:320 - Container stopped
21:25:42 - opendevin:WARNING: sandbox.py:336 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
21:25:42 - opendevin:INFO: sandbox.py:356 - Container started
21:25:43 - opendevin:INFO: sandbox.py:372 - waiting for container to start: 1, container status: running
21:25:44 - opendevin:INFO: sandbox.py:198 - Connecting to opendevin@localhost via ssh. If you encounter any issues, you can try ssh -v -p 2222 opendevin@localhost
with the password 'b0df15b7-d8dc-4b2d-baaa-eecdc7196354' and report the issue on GitHub.
That... looks good? If there's no error when it tries to ssh, it's good news... it connects when the frontend starts a task. What happens if you try to access localhost:3001?
But it still doesn't move to the next step. I mean, I am still stuck at the first step: no new plan output, no new response...
[plugin:vite:import-analysis] Failed to resolve import "../i18n/declaration" from "src/components/Workspace.tsx". Does the file exist?

C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/ChatInterface.tsx:8:24
C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/SettingModal.tsx:27:24
C:/Users/xprat/PycharmProjects/devin ai/OpenDevin/frontend/src/components/Workspace.tsx:7:24

22 | import Earth from "../assets/earth";
23 | import Pencil from "../assets/pencil";
24 | import { I18nKey } from "../i18n/declaration";
   |                         ^
25 | import { AllTabs, TabOption } from "../types/TabOption";
26 | import Browser from "./Browser";

at formatError (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:50863:46)
at TransformContext.error (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:50857:19)
at normalizeUrl (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66092:33)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66247:47
at async Promise.all (index 10)
at async TransformContext.transform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:66168:13)
at async Object.transform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:51172:30)
at async loadAndTransform (file:///C:/Users/xprat/PycharmProjects/devin%20ai/OpenDevin/frontend/node_modules/vite/dist/node/chunks/dep-whKeNLxG.js:53923:29)

Click outside, press Esc key, or fix the code to dismiss. You can also disable this overlay by setting server.hmr.overlay to false in vite.config.ts.
It still exists.
@hongdongyue2012 at that stage, I would make sure to wipe the container and even the image in docker, then run `make build`: it will redownload and rebuild.
@JayLZhou is the ssh daemon started? I would refresh the page and try to send a task, to see if the container gets more activity or errors. But maybe the best is the same: try to clear the image and redo, just to be sure: there have been successive updates today, both to the image and the code, until it worked on Mac. It's also worth, like in the screenshot above, trying the test password when you see the message about it.
Do you mean we need to re-pull and rebuild our repo?
Yes, after clearing the docker image
Let me try it again
@enyst @SmartManoj Actually, I have (1) cleared our image and rebuilt the repo, and (2) run the backend and frontend separately. But it still does not work...
I have the same question here.
Run this to check LLM response time.

```python
import warnings
warnings.filterwarnings("ignore")

import tomllib as toml
from datetime import datetime

from litellm import completion

file_path = r'config.toml'
with open(file_path, 'rb') as f:
    config = toml.load(f)

messages = [{"content": "What is the meaning of life?", "role": "user"}]
dt = datetime.now()
response = completion(model=config['LLM_MODEL'],
                      api_key=config['LLM_API_KEY'],
                      base_url=config.get('LLM_BASE_URL'),
                      messages=messages)
dt2 = datetime.now()  # take the timestamp before printing the content
print(response.choices[0].message.content)
print(f"Time taken: {(dt2 - dt).total_seconds():.1f}s")
```
I think our API key is good, since I can use my API key in MetaGPT.
I have the same question, as I am also stuck at step 0 in #908, and this script runs successfully:
The meaning of life is a deep and complex philosophical question that has been debated for centuries. Different people and cultures have different beliefs about the purpose and meaning of life. Some may find meaning in relationships, personal achievements, or spiritual fulfillment, while others may find meaning in contributing to the well-being of others or pursuing knowledge and understanding. Ultimately, the meaning of life is a deeply personal and subjective concept that each individual must explore and define for themselves.
Time taken: 2.9s
Add the following code to opendevin/llm/llm.py and run it:

```python
if __name__ == '__main__':
    llm = LLM()
    messages = [{"content": "42?", "role": "user"}]
    response = llm.completion(messages=messages)
    print('\n' * 4 + '--' * 20)
    print(response['choices'][0]['message']['content'])
```
@JayLZhou with `make start-backend` run separately, after you connect the frontend and enter a question, what are the errors?
@SmartManoj why do you suspect that it's the response time or the key? Did you experience something related to those?
@enyst On some low-end devices with 8GB RAM, even to generate "Hello", it took around ~3 mins for a ~6GB model.
Wrongly commented here instead of in #908 for @DEM1TASSE
> @enyst In some low-end devices with 8GB RAM, even to generate "Hello", it took around ~3 mins for a ~6GB model.
Was the browser window used before, in this example? Or, simply, was the browser used for multiple messages? There is a recently added history-saving feature, which attempts to restore sessions if it has them. It ends up taking a lot of time, because I think it's adding embeddings to the local vector store...
If you experience that yourself, can you make sure to clear the browser local storage and close all tabs used with the frontend?
> @enyst In some low-end devices with 8GB RAM, even to generate "Hello", it took around ~3 mins for a ~6GB model.
When testing the LLM manually. So I thought the user might stop the program after a few minutes, thinking it is stuck.
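One way to tell "slow" apart from "stuck" is to put a hard timeout around a test call; a minimal sketch (the `slow` function below is a stand-in for a slow model call, not OpenDevin code):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

def call_with_timeout(fn, timeout_s):
    """Run fn in a worker thread and stop waiting after timeout_s seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except FuturesTimeout:
        return None  # still running after the deadline: slow, or stuck
    finally:
        pool.shutdown(wait=False)  # don't block the caller on a hung call

def slow():
    time.sleep(0.5)  # stand-in for a slow model call
    return "done"

print(call_with_timeout(lambda: "42", 2.0))  # fast call finishes: 42
print(call_with_timeout(slow, 0.1))          # deadline hit: None
```

If the call returns None here but eventually finishes on its own, the backend is just slow, not hung.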
That makes sense. It makes me curious about the difference in duration between running llm.py for a completion call, as you suggested in issues, and the first completion call with opendevin when it starts a task... off-hand, I wonder if there may be some interference from the container. I mean, we can log the completion call, but the long starting time that you mention includes more, and I'd like to exclude the container and include the rest. Not at my machine atm, but it will be an interesting data point. 😅
My guess is that it does embeddings for history, which is probably useless (?) work, and potentially a lot of work, depending on how the user used the browser with history.
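To illustrate the guess above, a toy sketch of why a naive history restore scales with history length; `fake_embed` is a hypothetical stand-in, not the actual vector-store code:

```python
calls = 0

def fake_embed(text):
    """Stand-in for a real (and slow) embedding-model call."""
    global calls
    calls += 1
    return [float(len(text))]  # dummy vector

def restore_history(messages):
    # naive restore: re-embeds every saved message on every session start
    return [fake_embed(m) for m in messages]

history = [f"message {i}" for i in range(50)]
restore_history(history)
print(calls)  # one embedding call per history message -> 50
```

With a real model each call takes noticeable time, so a browser session with a long saved history could easily dominate startup.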
@JayLZhou add some debug statements after line 125: https://github.com/OpenDevin/OpenDevin/blob/707ab7b3f84fb5664ff63da0b52e7b0d2e4df545/opendevin/controller/agent_controller.py#L122-L125
I can open the webpage at http://localhost:3001/. But I am still stuck at this step, and it doesn't output anything else.
Same here. Have you solved it?
@Mason-zy Which LLM Model?
Check Gemini 1.5 Pro
Check this troubleshooting https://github.com/OpenDevin/OpenDevin/issues/995#issuecomment-2048465058
> Which LLM model?
> Check Gemini 1.5 Pro https://developers.googleblog.com/2024/04/gemini-15-pro-in-public-preview-with-new-features.html
GPT-4. Then I ran the program that checks the LLM response time, and there was no output. My backend startup now shows two 404s, which weren't there before; I don't know if that has any impact.
> @Mason-zy Check this troubleshooting #995 (comment)

I also tried, but it is still stuck in the console. To rule out network issues, I am now trying to deploy a local LLM.
Run this: `poetry run python opendevin/main.py -d ./workspace/ -t "write a bash script that prints hi"` and, after it gets stuck, press Ctrl + C to stop it and share the screenshot.
Should be fixed with the new docker installation method!