TransformerOptimus / SuperAGI

<⚡️> SuperAGI - A dev-first open source autonomous AI agent framework. Enabling developers to build, manage & run useful autonomous agents quickly and reliably.
https://superagi.com/
MIT License
15.25k stars, 1.83k forks

Agent doesn't get past "Thinking..." and constant GET requests in backend #165

Closed: aodrasa closed this issue 11 months ago

aodrasa commented 1 year ago

Hoping someone can help with this. I've tried new agents and new goals, and even after letting it run for about 15 minutes now, I constantly get the same result.

All variables in the config file are correct, as I've successfully run similar processes with Auto-GPT and BabyAGI in the past.

[Two screenshots attached: 2023-06-03 at 5:21 pm]
mberman84 commented 1 year ago

Me too... it was working yesterday and now it's not working at all. Just hanging on "Thinking".

Fluder-Paradyne commented 1 year ago

Can you share the goals and tools used so that we can replicate this behaviour?

luciferlinx101 commented 1 year ago

Yeah, sharing the goals and tools would help us replicate and find the issue.

TsundokuJim commented 1 year ago

I'm having the same issue. My goal was asking it to create a presentation on how generative AI can improve efficiency and ROI for companies in a particular industry. The tools were the basic defaults (Google search, Read file, and Write file).

I noticed that IP 172.18.0.1 was being hammered, so I did a lookup on it and saw it is identified as a BOGON network. So I turned off blocking of BOGONs and private networks in my firewall. Unfortunately, that didn't solve the problem.

schobele commented 1 year ago

Same issue. Noticed this in the logs:

"The index exceeds the project quota of 1 pods by 1 pods. Upgrade your account or change the project settings to increase the quota."

superagi-celery-1 | [2023-06-05 20:47:32,251: WARNING/ForkPoolWorker-7] Exception Occured in celery job
superagi-celery-1 | [2023-06-05 20:47:32,252: WARNING/ForkPoolWorker-7] (400)
superagi-celery-1 | Reason: Bad Request
superagi-celery-1 | HTTP response headers: HTTPHeaderDict({'content-type': 'text/plain; charset=UTF-8', 'date': 'Mon, 05 Jun 2023 20:47:32 GMT', 'x-envoy-upstream-service-time': '688', 'content-length': '131', 'server': 'envoy'})
superagi-celery-1 | HTTP response body: The index exceeds the project quota of 1 pods by 1 pods. Upgrade your account or change the project settings to increase the quota.
superagi-celery-1 | [2023-06-05 20:47:32,292: INFO/ForkPoolWorker-7] Task execute_agent[e07583db-9a94-48d8-ba34-4cfeda2b8352] succeeded in 3.0531644560032873s: None

mhsekr commented 1 year ago

@luciferlinx101

I am having this issue now.

Repro goals & tools:

  1. You are an expert blog writer, specialised in writing about all that is related to online betting.

  2. You answer Kiki's requests; make sure to clarify with her before answering.

  3. You will use a guide file input, an example input, as well as google search in order to create content based on a new topic.

  4. You extract all the instructions by requesting Kiki to input INSTRUCTIONS FILE, and propose an EXAMPLE FILE as an option.

  5. You produce a blog that fulfills the instructions and the file formatting. Then you wait for ADDITIONAL COMMENTS input as an option.

  6. You need to make sure that all the information you write is confirmed from the web search first before proposing a response.

Tools assigned: Read File Write File GoogleSearch GoogleSerp Human

Hope this helps. The issue occurs with other goals and tools as well.

schobele commented 1 year ago

Fixing the Pinecone config solved the issue for me.

TsundokuJim commented 1 year ago

Fixing the Pinecone config solved the issue for me.

Where did you fix it? There doesn't seem to be a pinecone conf file and config.yaml only seems to have config options for redis and S3.

schobele commented 1 year ago

It's in the config.yaml

Also take a look at the agent_executor.py to change the Pinecone index as described in the README. Or name your index "super-agent-index1"

TsundokuJim commented 1 year ago

Well spotted! I checked the README but it just looked like advertising for the project, and every link led to the project website. I didn't notice the wall of text at the bottom with the installation tips (I thought it was just legal boilerplate).

Vonnegut1 commented 1 year ago

It's in the config.yaml

Also take a look at the agent_executor.py to change the Pinecone index as described in the README. Or name your index "super-agent-index1"

Could you help me make this fix as I am also stuck in the thinking loop. What needs to be fixed in the config.yaml? Also, I can't locate the agent_executor.py file.

Thank you.

TsundokuJim commented 1 year ago

Actually, the problem I was having wasn't caused by any of this. It turns out, the only issue was that GPT4 doesn't seem to be supported, so I had to switch to GPT3.5-turbo.

<What needs to be fixed in the config.yaml?> As long as you entered the correct API keys for OpenAI, Pinecone, Google Search etc., the config.yaml file should be fine.

The agent_executor.py file is in the SuperAGI/superagi/jobs directory. The 'memory' entry should be:

memory = VectorFactory.get_vector_storage("PineCone", "super-agent-index1", OpenAiEmbedding())

as long as your index name is "super-agent-index1" (which mine was).

For me, everything seemed correct, but I still got the endless 'thinking' loop. Then I noticed it couldn't find ChatGPT4 in the logs. That was a headsmack moment. Changing it to 3.5-turbo fixed the problem.
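
For anyone searching for that line, here is a rough sketch of the relevant bit of agent_executor.py (the import paths are guesses based on the names in this thread, not copied from the repo; only the last line is quoted above):

# Hypothetical excerpt; import paths are assumptions
from superagi.vector_store.vector_factory import VectorFactory        # assumed path
from superagi.vector_store.embedding.openai import OpenAiEmbedding    # assumed path

# The vector store the agent uses as memory. The second argument must match
# the name of the index you created in Pinecone, here "super-agent-index1".
memory = VectorFactory.get_vector_storage("PineCone", "super-agent-index1", OpenAiEmbedding())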

Vonnegut1 commented 1 year ago

Actually, the problem I was having wasn't caused by any of this. It turns out, the only issue was that GPT4 doesn't seem to be supported, so I had to switch to GPT3.5-turbo.

<What needs to be fixed in the config.yaml?> As long as you entered the correct API keys for OpenAI, Pinecone, Google Search etc., the config.yaml file should be fine.

The agent_executor.py file is in the SuperAGI/superagi/jobs directory. The 'memory' entry should be:

memory = VectorFactory.get_vector_storage("PineCone", "super-agent-index1", OpenAiEmbedding())

as long as your index name is "super-agent-index1" (which mine was).

For me, everything seemed correct, but I still got the endless 'thinking' loop. Then I noticed it couldn't find ChatGPT4 in the logs. That was a headsmack moment. Changing it to 3.5-turbo fixed the problem.

Yes indeed. 3.5 Turbo is working. It's still running but I did see these two messages: "Unknown tool 'WriteFile'. Please refer to the 'TOOLS' list for available tools and only respond in the specified JSON format." and "Unknown tool 'ReadFile'. Please refer to the 'TOOLS' list for available tools and only respond in the specified JSON format."

Separately, how do you kill a previous Run? My GPT-4 test is still grinding in my list.

TsundokuJim commented 1 year ago

To the left of the green "Run Again" button, there's a little three-dot menu (that kind of blends into the grey of the window). It has Pause and Delete options. I found I needed to Pause the run before I Deleted it, or the options become unresponsive.

Vonnegut1 commented 1 year ago

To the left of the green "Run Again" button, there's a little three-dot menu (that kind of blends into the grey of the window). It has Pause and Delete options. I found I needed to Pause the run before I Deleted it, or the options become unresponsive.

It's super easy to miss the ellipses. I have a pause and resume, but no delete. Pause worked, so that's a victory. Now just trying to find whether there was actually a file written with the result. I received the write/read errors above. I can't seem to find anything.

I do have read and write in my listed tools. Is there anything to change in read_file.py or write_file.py?

TsundokuJim commented 1 year ago

Yes indeed. 3.5 Turbo is working. It's still running but I did see these two messages: "Unknown tool 'WriteFile'. Please refer to the 'TOOLS' list for available tools and only respond in the specified JSON format." and "Unknown tool 'ReadFile'. Please refer to the 'TOOLS' list for available tools and only respond in the specified JSON format."

I'll need to check that myself when I get back to my PC. All I got for output was the AI generating bad JSON, then beating itself up for the bad JSON and promising to fix it, then creating more bad JSON, over and over and over.

sacredgrove23 commented 1 year ago

It's in the config.yaml

Also take a look at the agent_executor.py to change the Pinecone index as described in the README. Or name your index "super-agent-index1"

This worked for me! You're the real MVP. They should make sure to tell folks to change their index name to super-agent-index1 on the front page, because I'm sure many will have this stuck-on-"Thinking" problem.

neelayan7 commented 1 year ago

We have removed the default dependency on Pinecone. Please pull main again and try. It'll work

kristiansDraguns commented 1 year ago

It doesn't get past "Thinking" for me either. It shows some setups from 3 hours ago and just thinks. I tried closing both Docker and VS Code and nothing changed; it still spat out the same output and started "Thinking"...

bstojkovic commented 1 year ago

I'm having the same issue. "Thinking..." forever and GET requests in backend. There is nothing interesting in Docker logs.

These are my Goals and Tools:

[Screenshot of my Goals and Tools]

Tbh, I don't know what this can be used for, so I put in the first idea that popped into my mind (maybe a showcase of how this tool can be used and what it can be used for would be beneficial, or maybe I'm just too new to all this).

I think the issue is that I didn't create an index in Pinecone, because the README doesn't note that an index should be set up (and I thought the index would be set up automatically by create_index).

I would set up an index, but I don't know what configuration it should have (how many dimensions and what metric).

bstojkovic commented 1 year ago

I just tried setting up an index with 1 dimension and cosine metric and as soon as the index was set up by Pinecone, the agent started working!
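
For reference, this is roughly what creating that index looks like with the pinecone-client that was current at the time (the API key and environment below are placeholders you'd take from your Pinecone console):

import pinecone

# Placeholders: use your own API key and environment from the Pinecone console
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="us-west1-gcp")

# Create the index the agent expects, if it does not already exist.
# dimension=1 and the cosine metric are simply what worked above; OpenAI
# ada-002 embeddings are normally 1536-dimensional, so adjust if needed.
if "super-agent-index1" not in pinecone.list_indexes():
    pinecone.create_index(name="super-agent-index1", dimension=1, metric="cosine")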

WareeshaN commented 1 year ago

Try uncommenting the "openai.api_key = get_config("OPENAI_API_KEY")" line in the chat_completion method in openai.py.
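
In other words, something along these lines inside the project's openai.py wrapper (a sketch only; the class, method signature and surrounding code are assumptions, and it targets the pre-1.0 openai client in use at the time):

import openai                                    # openai-python < 1.0
from superagi.config.config import get_config    # assumed import path

class OpenAi:
    def __init__(self, model="gpt-3.5-turbo"):
        self.model = model

    def chat_completion(self, messages, max_tokens=1024):
        # The suggestion above: keep this line uncommented so the key is set before each call
        openai.api_key = get_config("OPENAI_API_KEY")
        response = openai.ChatCompletion.create(
            model=self.model,
            messages=messages,
            max_tokens=max_tokens,
        )
        return response.choices[0].message["content"]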

neelayan7 commented 1 year ago

Are you still facing the same issue?

neelayan7 commented 1 year ago

@aodrasa ?

iAbhinav commented 1 year ago

Actually, the problem I was having wasn't caused by any of this. It turns out, the only issue was that GPT4 doesn't seem to be supported, so I had to switch to GPT3.5-turbo.

<What needs to be fixed in the config.yaml?>

As long as you entered the correct API keys for OpenAI, Pinecone, Google Search etc., the config.yaml file should be fine.

The agent_executor.py file is in the SuperAGI/superagi/jobs directory. The 'memory' entry should be:

memory = VectorFactory.get_vector_storage("PineCone", "super-agent-index1", OpenAiEmbedding())

as long as your index name is "super-agent-index1" (which mine was).

For me, everything seemed correct, but I still got the endless 'thinking' loop. Then I noticed it couldn't find ChatGPT4 in the logs. That was a headsmack moment. Changing it to 3.5-turbo fixed the problem.

Changing the index name did not work for me, but changing to GPT-3.5-turbo solved it.

But even with GPT-3.5, it gave me the same initial prompt three times before it got to work.

RasmusN commented 1 year ago

I am also struggling with this issue. I've tried both changing to GPT-3.5-turbo and making sure the Pinecone environment and API key are set, without success.

I see this in my logs:

celery_1           | 2023-08-23 18:20:04 UTC - Super AGI - INFO - [/app/superagi/jobs/agent_executor.py:61] - Unable to setup the pinecone connection...
celery_1           | [2023-08-23 18:20:04,322: INFO/ForkPoolWorker-8] Unable to setup the pinecone connection...
celery_1           | 2023-08-23 18:20:04 UTC - Super AGI - INFO - [/app/superagi/jobs/agent_executor.py:81] - Exception in executing the step: ToolResponseQueryManager.__init__() got an unexpected keyword argument 'memory'
celery_1           | [2023-08-23 18:20:04,373: INFO/ForkPoolWorker-8] Exception in executing the step: ToolResponseQueryManager.__init__() got an unexpected keyword argument 'memory'      
celery_1           | [2023-08-23 18:20:04,378: INFO/MainProcess] Task execute_agent[5a2127d9-2664-49a7-9ab1-87adc8df6bd5] received
celery_1           | [2023-08-23 18:20:04,379: INFO/ForkPoolWorker-8] Task execute_agent[198ce82f-2fb8-4ed9-a830-42d93a5a730c] succeeded in 2.8044759000185877s: None
proxy_1            | 172.18.0.1 - - [23/Aug/2023:18:20:13 +0000] "GET /_next/webpack-hmr HTTP/1.1" 499 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
backend_1          | INFO:     172.18.0.7:50192 - "GET /agentexecutionfeeds/get/execution/38 HTTP/1.0" 200 OK
proxy_1            | 172.18.0.1 - - [23/Aug/2023:18:20:13 +0000] "GET /api/agentexecutionfeeds/get/execution/38 HTTP/1.1" 200 48 "http://localhost:3000/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36" "-"
celery_1           | [2023-08-23 18:20:19,460: WARNING/ForkPoolWorker-8] Handling tools import
celery_1           | 2023-08-23 18:20:19 UTC - Super AGI - INFO - [/app/superagi/worker.py:59] - Execute agent:2023-08-23T18:20:04.374085,38
celery_1           | [2023-08-23 18:20:19,565: INFO/ForkPoolWorker-8] Execute agent:2023-08-23T18:20:04.374085,38
gaurav274 commented 1 year ago

Reverted the commit, and it works for me!

Fluder-Paradyne commented 1 year ago

@gaurav274 Thank you. We are partially reverting 1105, only the part which is causing the error.