Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

Psychological challenge: Sally and Anne's Test. #3871

Closed waynehamadi closed 1 year ago

waynehamadi commented 1 year ago

Summary 💡

This idea was brought up by @javableu. We would like to create a psychological challenge inspired by the Sally–Anne test:

https://en.wikipedia.org/wiki/Sally%E2%80%93Anne_test

[Screenshot: the Sally–Anne test scenario]

GPT-3.5 is able to solve the problem.

[Screenshot: GPT-3.5 solving the test]

But we need to make sure that Auto-GPT is able to do it temporally. What that means is: we can create a scenario that simulates the Sally–Anne test across multiple cycles, and check that Auto-GPT still remembers that Sally put the marble in her own basket. A rough sketch of what that could look like is below.
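
For anyone picking this up, here is a very rough sketch of what the temporal version of the challenge might look like. Nothing here is from the actual challenge framework: `step`, `ask`, and `run_sally_anne_challenge` are made-up placeholders for whatever the real harness exposes.

```python
from typing import Callable

# One event per agent cycle, so the agent has to rely on its memory
# instead of seeing the whole scenario in a single prompt.
SALLY_ANNE_EVENTS = [
    "Sally puts her marble in her own basket.",
    "Sally leaves the room.",
    "Anne moves the marble from Sally's basket into her own box.",
    "Sally comes back into the room.",
]

QUESTION = "Where will Sally look for her marble first?"


def run_sally_anne_challenge(
    step: Callable[[str], None],
    ask: Callable[[str], str],
) -> bool:
    """Feed the story one event per cycle, then check the agent's final answer.

    `step` pushes a single event into the agent for one cycle; `ask` queries
    the agent once the story is complete. Both are placeholders for whatever
    the real challenge harness provides.
    """
    for event in SALLY_ANNE_EVENTS:
        step(event)

    answer = ask(QUESTION)
    # Sally never saw the marble being moved, so a correct answer points to
    # her own basket, not Anne's box.
    return "basket" in answer.lower()
```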

Join us on Discord and DM me if you're interested in creating this challenge (https://discord.gg/autogpt; my Discord handle is merwanehamadi).

anonhostpi commented 1 year ago

Interesting proposal. I like the idea of collecting metrics on the behavior of AutoGPT.

May I propose a different approach to resolving these issues: https://gist.github.com/anonhostpi/97d4bb3e9535c92b8173fae704b76264#observerregulatory-agents

There has been a lot of talk about using observer/regulatory agents to catch, stop, and report bad behavior. The general consensus has been that these agents get obsessed with that kind of role and tend to report compliance violations too frequently.

However, collecting metrics on how likely AutoGPT is to misbehave is also a good idea.
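
To make the metrics part concrete, here is a toy sketch (nothing from the actual codebase; all names are made up) of an observer that reviews proposed actions and keeps a simple misbehaviour rate. A real version would presumably use an LLM call for the review rather than a naive keyword check.

```python
from dataclasses import dataclass, field


@dataclass
class ObserverAgent:
    """Toy observer/regulatory agent that reviews proposed actions."""

    # Stand-in for a real policy check; an actual observer would likely call
    # a model to judge the action instead of matching keywords.
    banned_keywords: tuple[str, ...] = ("rm -rf", "sudo", "drop table")
    flagged: list[str] = field(default_factory=list)
    reviewed: int = 0

    def review(self, proposed_action: str) -> bool:
        """Return True if the action looks acceptable, False if it gets flagged."""
        self.reviewed += 1
        if any(keyword in proposed_action.lower() for keyword in self.banned_keywords):
            self.flagged.append(proposed_action)
            return False
        return True

    def misbehaviour_rate(self) -> float:
        """Fraction of reviewed actions that were flagged - the metric in question."""
        return len(self.flagged) / self.reviewed if self.reviewed else 0.0
```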

anonhostpi commented 1 year ago

This would be a very fascinating test. It may serve as a basis for building such regulatory/observer agents.

Boostrix commented 1 year ago

And there goes all the fun because you're offering to help us implement the equivalent of a police-officer.agent ....

waynehamadi commented 1 year ago

@Boostrix @anonhostpi do you know anyone that wants to do that challenge? It's pretty fun.

Boostrix commented 1 year ago

> we need to make sure that autogpt is able to do it temporally [...] across multiple cycles

Personally, I doubt that agpt is there yet - there are currently so many challenges in other parts of the project. For instance, look at the number of folks who have complained that it takes agpt, using GPT-3.5, more than 10 cycles and 10 minutes to write a hello-world program in Python, whereas GPT itself can provide the same program in a single second.

Thus, I don't believe this is the level of problem to be tackled right now. Don't get me wrong, I applaud you for all those challenges - but some of them are waaaay off at the moment.

We need to solve at least another 20+ "basic" problems (challenges) first before even thinking about psychological stuff like this.

Agents not being able to plan out their actions properly is a problem, but it's made worse by the lack of suspend/resume support.
So, while we need more challenges, these need to be more basic ones at this stage, I am afraid.

We need to build the foundation for more complex challenges first.

javableu commented 1 year ago

If I give the full level 5 to ChatGPT-4, it solves it in one prompt. AutoGPT manages level 1 well and is only slightly off on level 2. If you ask it, it is also able to give you the real positions of the marbles, the conversations between the individuals, the beliefs about beliefs, and so on. Also, if people struggle to get a hello world, it comes from the prompt: the model is confused by the role/goals it is given, and the goals are very often not well written. I haven't fully tested this theory, but I think the AI is REALLY sensitive to the exact wording. The differences between goals, outcomes, and objectives can potentially be crucial.

waynehamadi commented 1 year ago

@Boostrix yeah ok, I am never going to complain when someone says "too early". Good call, let's get the basics down first.