Closed: codearranger closed this issue 1 year ago
There are several situations where the script may end up looping unnecessarily, e.g.: #1591
Given the open-ended nature of the problem, the "solution" might be to keep track of loops by incrementing a counter whenever an identical invocation has previously been executed (think of it like a hash of all arguments). If that number keeps growing, the script is clearly stuck, and it should consider interrupting the loop and probably asking for human feedback.
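A minimal sketch of what that counter could look like (the `LoopDetector` name and the threshold are made up for illustration, not part of the Auto-GPT codebase):

```python
import hashlib
import json
from collections import Counter

class LoopDetector:
    """Counts how often the exact same invocation has been seen."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.counts = Counter()

    def _fingerprint(self, command: str, arguments: dict) -> str:
        # Hash the command name plus all arguments into one stable key.
        payload = json.dumps({"command": command, "args": arguments}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def record(self, command: str, arguments: dict) -> bool:
        """Return True once this exact invocation has repeated past the threshold."""
        key = self._fingerprint(command, arguments)
        self.counts[key] += 1
        return self.counts[key] >= self.threshold
```

The agent loop would then call `detector.record(cmd, args)` before executing each command and fall back to human feedback when it returns True.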
Oh totally that’s a good idea
If I'm understanding correctly, it could even be as simple as having, as you said, a counter, but just one per autonomous actor, plus an env var for the number of loops allowed before you spend an extra API query to have AGPT ask itself whether it is stuck in a loop, and then hooks could be used as needed, I suppose?
Honestly, it's not entirely clear to me, but the GitHub extension you mentioned is a clever way to approach the problem without having to approach the problem directly. Kudos to you.
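A rough sketch of that env-var gate (the `LOOP_CHECK_THRESHOLD` variable and the `ask_llm` callable are hypothetical, not existing Auto-GPT settings):

```python
import os

# Hypothetical env var controlling when the self-check kicks in.
LOOP_CHECK_THRESHOLD = int(os.getenv("LOOP_CHECK_THRESHOLD", "3"))

def maybe_check_for_loop(repeat_count: int, ask_llm) -> bool:
    """Once the repeat count passes the threshold, spend one extra
    API query asking the model itself whether it is stuck."""
    if repeat_count < LOOP_CHECK_THRESHOLD:
        return False
    answer = ask_llm(
        "You have issued the same command with identical arguments "
        f"{repeat_count} times in a row. Are you stuck in a loop? "
        "Answer YES or NO."
    )
    return answer.strip().upper().startswith("YES")
```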
I would have thought of it like a stack that is incremented/decremented as needed; whenever the arguments are the same as before (just hash all args together), you would automatically be tracking the number of identical/redundant invocations. Obviously, it would also be interesting to check whether the LLM response keeps being the same, because then you literally want to bail out.
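A simplified sketch of that idea which also hashes the LLM reply (names are illustrative, and the stack bookkeeping is omitted here):

```python
import hashlib

def _digest(*parts: str) -> str:
    return hashlib.sha256("\x00".join(parts).encode()).hexdigest()

class RepetitionTracker:
    """Tracks consecutive identical invocations and identical LLM replies."""

    def __init__(self):
        self.last_call = None
        self.last_reply = None
        self.call_repeats = 0
        self.reply_repeats = 0

    def observe(self, command: str, args_repr: str, reply: str) -> bool:
        """Return True when the agent should bail out."""
        call_key = _digest(command, args_repr)
        reply_key = _digest(reply)

        self.call_repeats = self.call_repeats + 1 if call_key == self.last_call else 0
        self.reply_repeats = self.reply_repeats + 1 if reply_key == self.last_reply else 0
        self.last_call, self.last_reply = call_key, reply_key

        # Identical replies are a stronger "stuck" signal than identical calls.
        return self.reply_repeats >= 2 or self.call_repeats >= 4
```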
More generally, a loop/task that repeatedly fails to yield any useful result should probably trigger an experimentation mode, where the agent begins exploring the solution space to come up with a few alternatives for accomplishing its goal and to see which of these are feasible (probably constrained via "quotas" while exploring the space: #3466).
This could then be a "research" phase, restricted to xx minutes or xx API tokens (USD), etc.
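A sketch of how such a budgeted research phase could be enforced (all names and limits here are illustrative assumptions, not existing Auto-GPT features):

```python
import time

def research_phase(candidates, try_candidate, max_seconds=300, max_tokens=5000):
    """Explore alternative plans under a time and token budget.

    `candidates` is a list of alternative approaches; `try_candidate`
    runs one and returns (feasible: bool, tokens_used: int).
    """
    deadline = time.monotonic() + max_seconds
    tokens_left = max_tokens
    feasible = []

    for plan in candidates:
        if time.monotonic() >= deadline or tokens_left <= 0:
            break  # Budget exhausted: stop exploring.
        ok, used = try_candidate(plan)
        tokens_left -= used
        if ok:
            feasible.append(plan)

    return feasible
```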
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
Which Operating System are you using?
Docker
Which version of Auto-GPT are you using?
Latest Release
GPT-3 or GPT-4?
GPT-4
Steps to reproduce 🕹
This just occasionally happens.
Current behavior 😯
It attempts to load a bad URL endlessly. See the log below.
Expected behavior 🤔
It should return the error back to the requesting agent so that it can correct its mistake.
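Something along these lines (an illustrative sketch, not the actual Auto-GPT browsing code) would surface the failure to the agent instead of retrying forever:

```python
import requests

def browse_website(url: str) -> str:
    """Fetch a URL once and report failures to the calling agent."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
    except requests.RequestException as err:
        # Returning the error lets the agent revise the URL
        # rather than re-issuing the same bad request.
        return f"Error: could not load {url}: {err}"
    return response.text
```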
Your prompt 📝
Your Logs 📒