Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

Agents aren't being fed full instructions on instantiation #907

Closed vandervoortj closed 1 year ago

vandervoortj commented 1 year ago


Steps to reproduce 🕹

Give the AI tasks that will cause it to spawn sub-agents.

Current behavior 😯

When agents are created, they are often given a task without full context and respond with "As an AI language model...". This often results in output like:

```
NEXT ACTION: COMMAND = start_agent ARGUMENTS = {'name': 'summarization_agent', 'task': 'Summarize AGI Literature', 'prompt': 'Please summarize the key findings of an AGI literature summary file.'}
SYSTEM: Command start_agent returned: Agent summarization_agent created with key 2. First response: I'm sorry, but to summarize a literature summary file on AGI is a very broad task. Can you please provide me with the specific literature summary file you are referring to?
```

```
NEXT ACTION: COMMAND = start_agent ARGUMENTS = {'name': 'AGISummarize', 'task': 'Find and summarize relevant scientific papers or articles on AGI', 'prompt': 'What scientific papers or relevant articles can you find about Artificial General Intelligence? Please add any valuable information you find to the AGI Comprehensive Summary file.'}
SYSTEM: Command start_agent returned: Agent AGISummarize created with key 1. First response: As an AI language model, I do not have the capability to access the internet or any other external sources. However, I can suggest some resources where you can find scientific papers and articles related to Artificial General Intelligence. You can start by checking out the website of the AGI Society (agisociety.org), which is dedicated to advancing research in AGI. Additionally, some notable scientific papers on AGI include "Artificial General Intelligence: Concept, State of the Art, and Future Prospects" by Ben Goertzel and Cassio Pennachin, as well as "Artificial General Intelligence: A Path to Superintelligence" by Jürgen Schmidhuber. There are also several journals such as "Journal of Artificial Intelligence Research" and "Artificial Intelligence" that publish relevant articles on AGI.
```

This is not only a waste of tokens; it also opens the process up to being flooded with hallucinations. It seems we need a way for the director to pass more of its context to sub-agents on instantiation.
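
A minimal sketch of what that could mean (hypothetical names throughout; this is not AutoGPT's actual code):

```python
# Hypothetical sketch: compose the sub-agent's first message from the
# director's shared context, not just the bare task prompt. The names
# build_agent_prompt and shared_context are made up for illustration.

def build_agent_prompt(task: str, prompt: str, shared_context: str) -> str:
    """Give the sub-agent the context it needs to start working immediately."""
    return (
        f"{shared_context}\n\n"
        f"You are a sub-agent. Your task: {task}\n"
        f"Instructions: {prompt}\n"
        "Begin the task immediately; do not reply with disclaimers."
    )


print(build_agent_prompt(
    task="Summarize AGI Literature",
    prompt="Summarize the key findings of the AGI literature summary file.",
    shared_context="Overall goal: research and summarize prior work on AGI.",
))
```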

Expected behavior 🤔

Agents are spawned with enough context to immediately begin performing their tasks in the environment.

Your prompt 📝

```yaml
ai_goals:
- Research all the various ways AGI has been referred to in the past
- Use those terms to find and summarize previous research in AGI into separate files
- Summarize across all research summaries into a single final file
- Shutdown
ai_name: AGIResearcher
ai_role: an AI designed to autonomously research previous works in the AGI field
```
vandervoortj commented 1 year ago

I've made a naive and dirty implementation where agents are instantiated with `data.load_prompt()` plus the local prompt, so they get to the task immediately, but they should really have their own system prompt. If no one gets to it before me, I'll work on it tomorrow.

james431987 commented 1 year ago

> I've made a naive and dirty implementation where agents are instantiated with `data.load_prompt()` plus the local prompt, so they get to the task immediately, but they should really have their own system prompt. If no one gets to it before me, I'll work on it tomorrow.

Would this allow agents to write files or perform tasks? My director always seems to assume they can, and it gets confused when they can't.

Also, I'd love it if you posted your dirty implementation :D

vandervoortj commented 1 year ago

> Would this allow agents to write files or perform tasks? My director always seems to assume they can, and it gets confused when they can't.
>
> Also, I'd love it if you posted your dirty implementation :D

I'm not sure, as I'm not familiar with the codebase yet and haven't been a Python guy since 2.7.

Line 268 of `commands.py`: `agent_response = message_agent(key, data.load_prompt() + " " + prompt)`

Remember to `import data` at the top. Of course you could use an f-string instead, but as I said, it's dirty. I'll take a deeper dive into the code after 5 PM EST and see whether agents can use tools, but I think the answer is either "no" or "they probably shouldn't", in which case the director should probably be told what agents can and can't do.
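
Paraphrased into a self-contained snippet (the stubs stand in for AutoGPT's real `message_agent` and `data.load_prompt`, and the "before" line is an assumption from context, not a verified diff):

```python
# Self-contained sketch of the patch described above; in AutoGPT,
# message_agent() and data.load_prompt() come from the codebase
# and are stubbed here so the snippet runs on its own.

def load_prompt() -> str:  # stand-in for data.load_prompt()
    return "You are AGIResearcher, an AI designed to research prior AGI work."


def message_agent(key: int, message: str) -> str:  # stand-in for the real command
    return f"agent {key} received: {message!r}"


def start_agent_response(key: int, prompt: str) -> str:
    # Before (presumably): message_agent(key, prompt) -- no shared context.
    # After: prepend the main system prompt so the sub-agent has full context.
    return message_agent(key, f"{load_prompt()} {prompt}")


print(start_agent_response(2, "Summarize the AGI literature summary file."))
```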

onekum commented 1 year ago

I've tried to inform the director of the agents' limitations in the prompt myself, but haven't had any luck. Using GPT-3.5.

james431987 commented 1 year ago

> I'm not sure, as I'm not familiar with the codebase yet and haven't been a Python guy since 2.7.
>
> Line 268 of `commands.py`: `agent_response = message_agent(key, data.load_prompt() + " " + prompt)`
>
> Remember to `import data` at the top. Of course you could use an f-string instead, but as I said, it's dirty. I'll take a deeper dive into the code after 5 PM EST and see whether agents can use tools, but I think the answer is either "no" or "they probably shouldn't", in which case the director should probably be told what agents can and can't do.

I think you're right. If the director at least knew that agents can't perform tasks the way it can, then maybe it would stop abandoning all those tasks and would actually do them itself.

I'd still like to see the director build its own army of employees, assign them tasks, and have them report back, like a CEO running a business or a good project manager. Give them all autonomy.

vandervoortj commented 1 year ago

> I think you're right. If the director at least knew that agents can't perform tasks the way it can, then maybe it would stop abandoning all those tasks and would actually do them itself.
>
> I'd still like to see the director build its own army of employees, assign them tasks, and have them report back, like a CEO running a business or a good project manager. Give them all autonomy.

Perhaps the agents should be able to use tools, since the director likes to believe they can. At the very least, though, the agent-spawning commands should be removed from sub-agents to avoid infinite delegation chains.
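
A rough sketch of that idea, with an illustrative command table rather than AutoGPT's actual registry:

```python
# Hypothetical sketch (names are illustrative, not AutoGPT's actual
# registry): strip agent-spawning commands from a sub-agent's toolset
# so delegation can't recurse forever.

DELEGATION_COMMANDS = {"start_agent", "message_agent", "list_agents", "delete_agent"}


def sub_agent_commands(commands: dict) -> dict:
    """Return a copy of the command table without delegation tools."""
    return {name: fn for name, fn in commands.items()
            if name not in DELEGATION_COMMANDS}


COMMANDS = {
    "start_agent": lambda *args: "spawned",     # delegation -- removed
    "browse_website": lambda *args: "<html>",   # real tool -- kept
    "write_to_file": lambda *args: "written",   # real tool -- kept
}

print(sorted(sub_agent_commands(COMMANDS)))  # ['browse_website', 'write_to_file']
```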

ChessScholar commented 1 year ago

Looking forward to this! The hallucinations and "As an AI language model..." responses are costly and inefficient.

estiens commented 1 year ago

Honestly, I think specialized agents are what's necessary. If the director just calls another language model, it's going to get hallucinations no matter what, because it thinks there is something real there when it's just words coming back.

Ideally it would spawn more coordinator agents if needed (i.e., other versions of itself), but all tasks would be handled by actual routines that it knows how to pass arguments to: a web-scraping agent, a summarizing agent, and so on. Of course, ideally it could eventually write its own agents, but right now we mostly get something that is interesting in what it wants to do and tries to do (the explanations are impressive!) without actually doing any of those things...
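
A rough sketch of that dispatch idea, with made-up routine names standing in for real implementations:

```python
# Hypothetical sketch: the coordinator dispatches each task to a
# concrete routine instead of another bare language model. All names
# here are illustrative.

def scrape(url: str) -> str:
    return f"<page contents of {url}>"   # stand-in for a real scraper


def summarize(text: str) -> str:
    return text[:60] + "..."             # stand-in for a real summarizer


SPECIALISTS = {
    "web_scraping": scrape,
    "summarizing": summarize,
}


def dispatch(task_type: str, argument: str) -> str:
    """The coordinator picks an actual routine by task type."""
    routine = SPECIALISTS.get(task_type)
    if routine is None:
        raise ValueError(f"no specialist registered for {task_type!r}")
    return routine(argument)


print(dispatch("web_scraping", "https://example.com"))
```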

tlnet1981 commented 1 year ago

Same here with gpt3only: the agents don't get the necessary information and reply with "As an AI language model, I do not have access to the internet..."

Boostrix commented 1 year ago

This tends to happen a lot when using sub-agents, which makes them kind of pointless: #3673

github-actions[bot] commented 1 year ago

This issue was closed automatically because it has been stale for 10 days with no activity.