Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

Endless loop of trying to load a bad URL. #3444

Closed codearranger closed 1 year ago

codearranger commented 1 year ago

⚠️ Search for existing issues first ⚠️

Which Operating System are you using?

Docker

Which version of Auto-GPT are you using?

Latest Release

GPT-3 or GPT-4?

GPT-4

Steps to reproduce 🕹

This just occasionally happens.

Current behavior 😯

It attempts to load a bad URL endlessly. See the log below.

Expected behavior 🤔

It should return the error to the requesting agent so that it can correct its mistake.

Your prompt 📝

# Paste your prompt here

Your Logs 📒

Error: The following AI output couldn't be converted to a JSON:
  Please execute the "google" command with the argument "input": "popular self-help and growth book topics". This should give us a list of potential book topics to explore.
NEXT ACTION:  COMMAND = browse_website ARGUMENTS = {'url': '<website_url>', 'question': '<what_you_want_to_find_on_website>'}
SYSTEM:  Command browse_website returned: Error: Message: invalid argument (Session info: headless chrome=112.0.5615.138)
Stacktrace:
#0 0x55adbc85ffe3 <unknown>
#1 0x55adbc59ebc1 <unknown>
#2 0x55adbc589446 <unknown>
#3 0x55adbc5877f3 <unknown>
#4 0x55adbc587c3d <unknown>
#5 0x55adbc5a0b16 <unknown>
#6 0x55adbc6158c5 <unknown>
#7 0x55adbc5fc8c2 <unknown>
#8 0x55adbc615232 <unknown>
#9 0x55adbc5fc693 <unknown>
#10 0x55adbc5cf03a <unknown>
#11 0x55adbc5d017e <unknown>
#12 0x55adbc821dbd <unknown>
#13 0x55adbc825c6c <unknown>
#14 0x55adbc82f4b0 <unknown>
#15 0x55adbc826d63 <unknown>
#16 0x55adbc7f9c35 <unknown>
#17 0x55adbc84a138 <unknown>
#18 0x55adbc84a2c7 <unknown>
#19 0x55adbc858093 <unknown>
#20 0x7fce693a2ea7 start_thread
Boostrix commented 1 year ago

There are several situations where the script may end up looping unnecessarily, e.g. #1591.

Given the open-ended nature of the problem, the "solution" might be to keep track of loops by incrementing a counter whenever an identical invocation has already been executed (think of it like a hash of all arguments). If that number keeps growing, the script has clearly gotten stuck, and it should consider interrupting the loop and probably asking for human feedback.
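
A minimal sketch of that counter idea, assuming a command loop that yields (command name, arguments) pairs; all names here are illustrative, not AutoGPT's actual internals:

```python
# Sketch only: LOOP_THRESHOLD and the function names are hypothetical.
import hashlib
import json
from collections import Counter

LOOP_THRESHOLD = 3  # how many identical invocations before we interrupt

invocation_counts = Counter()

def invocation_key(command_name: str, arguments: dict) -> str:
    """Hash the command name plus all of its arguments into a stable key."""
    payload = json.dumps({"command": command_name, "args": arguments}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def looks_stuck(command_name: str, arguments: dict) -> bool:
    """Return True once this exact invocation has repeated LOOP_THRESHOLD times."""
    key = invocation_key(command_name, arguments)
    invocation_counts[key] += 1
    return invocation_counts[key] >= LOOP_THRESHOLD

# In the agent loop, before executing a command:
#   if looks_stuck(command_name, arguments):
#       interrupt the loop and ask for human feedback
```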

zudsniper commented 1 year ago

Oh, totally, that's a good idea.

If I'm understanding correctly, it could even be as simple as having, as you said, a counter, but just one per autonomous actor, plus an env var for the number of loops before you spend an extra API query to have AGPT ask itself if it is stuck in a loop. Then hooks could be used as needed, I suppose?

Honestly it's not all entirely clear to me, but the GitHub extension you mentioned is a clever way to approach the problem without having to approach the problem. Kudos to you.
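
A rough sketch of that env-var-gated self-check, under stated assumptions: the AUTOGPT_LOOP_SELF_CHECK_AFTER variable and the ask_llm callback are hypothetical, not existing AutoGPT configuration or API.

```python
# Sketch only: the env var name and ask_llm callback are hypothetical.
import os

SELF_CHECK_AFTER = int(os.getenv("AUTOGPT_LOOP_SELF_CHECK_AFTER", "3"))

def maybe_self_check(repeat_count: int, recent_actions: list[str], ask_llm) -> bool:
    """After N identical loops, spend one extra query asking the model whether it is stuck."""
    if repeat_count < SELF_CHECK_AFTER:
        return False
    prompt = (
        "You have repeated the following action several times without making progress:\n"
        + "\n".join(recent_actions)
        + "\nAre you stuck in a loop? Answer YES or NO, and suggest a different next step."
    )
    answer = ask_llm(prompt)  # caller supplies the actual model call
    return answer.strip().upper().startswith("YES")
```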


Boostrix commented 1 year ago

I would have thought of it like a stack that is incremented/decremented as needed: whenever the arguments are the same as before (just hash all the args together), you would automatically be tracking the number of identical/redundant invocations. Obviously, it would also be interesting to check whether the LLM response keeps being the same, because then you literally want to bail out.
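
To illustrate the second point, a small sketch that hashes each LLM response and bails out when the same response keeps coming back; the class and parameter names are purely illustrative:

```python
# Sketch only: detect when the model keeps returning the exact same output.
import hashlib
from collections import deque

class ResponseLoopDetector:
    def __init__(self, window: int = 5, max_repeats: int = 3):
        self.recent = deque(maxlen=window)  # hashes of the last few responses
        self.max_repeats = max_repeats

    def should_bail(self, llm_response: str) -> bool:
        digest = hashlib.sha256(llm_response.encode("utf-8")).hexdigest()
        self.recent.append(digest)
        return self.recent.count(digest) >= self.max_repeats
```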

More generally, a loop/task that repeatedly doesn't yield any useful result should probably trigger an experimentation mode, where the agent begins exploring the solution space to come up with a few alternatives for accomplishing its goal and see which of these are feasible (probably constrained via "quotas" while exploring the space: #3466).

This could then be a "research" phase, which could be restricted to xx minutes or xx API tokens (USD), etc.
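
One way such a research phase could be bounded, as a sketch; the budget figures and class name are assumptions, not anything AutoGPT ships:

```python
# Sketch only: stop exploring alternatives once a wall-clock or token budget is spent.
import time

class ResearchBudget:
    def __init__(self, max_minutes: float = 5.0, max_tokens: int = 10_000):
        self.deadline = time.monotonic() + max_minutes * 60
        self.tokens_left = max_tokens

    def spend(self, tokens_used: int) -> None:
        self.tokens_left -= tokens_used

    def exhausted(self) -> bool:
        return time.monotonic() > self.deadline or self.tokens_left <= 0

# While exploring alternatives:
#   budget.spend(tokens_used_by_this_attempt)
#   if budget.exhausted(): stop and report the best alternative found so far
```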

github-actions[bot] commented 1 year ago

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] commented 1 year ago

This issue was closed automatically because it has been stale for 10 days with no activity.