joelalcedo closed this issue 1 year ago
Have a command to start a program and send it a prompt. The program would:
1) use gpt-3.5 to create search tags relevant to the prompt
2) use those tags to run a search
3) feed the search results plus the prompt to gpt-3.5 to choose the 5 results most relevant to the prompt
4) scrape those top 5 results
5) feed the scraped text plus the prompt to gpt-3.5 to summarize the information related to the prompt
6) export the result to Auto-GPT or another consumer
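The six steps above can be sketched roughly as follows. All function names here are hypothetical, and the gpt-3.5, search, and scrape calls are stubbed so the control flow can run end to end; a real version would swap in the OpenAI API and an actual search/scraping backend:

```python
def call_gpt35(instruction: str, content: str) -> str:
    """Stub standing in for a real gpt-3.5 chat-completion call."""
    return f"gpt-3.5[{instruction}]: {content[:60]}"

def run_search(tags: str) -> list[str]:
    """Stub standing in for a real web-search API."""
    return [f"https://example.com/result/{i}" for i in range(10)]

def scrape(url: str) -> str:
    """Stub standing in for a real page scraper."""
    return f"page text from {url}"

def search_and_summarize(prompt: str) -> str:
    # 1) use gpt-3.5 to create search tags relevant to the prompt
    tags = call_gpt35("create search tags", prompt)
    # 2) use those tags to run a search
    results = run_search(tags)
    # 3) feed results + prompt to gpt-3.5 to choose the top 5;
    #    in this sketch the model's ranking is ignored and the
    #    first five results are taken as a stand-in
    _ranking = call_gpt35("choose top 5", prompt + "\n" + "\n".join(results))
    top5 = results[:5]
    # 4) scrape the top 5 results
    pages = [scrape(url) for url in top5]
    # 5) feed the scraped text + prompt to gpt-3.5 to summarize
    summary = call_gpt35("summarize", prompt + "\n" + "\n".join(pages))
    # 6) export: return the summary for Auto-GPT or another consumer
    return summary
```

This keeps each gpt-3.5 call single-purpose (tagging, ranking, summarizing), which is the main design idea in the steps above.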
I think this is broken. If it's going to spin up a bot, it should be another Auto-GPT bot with code execution capabilities, etc.
Command evaluate_code returned: Error: The model: 'gpt-4' does not exist.
I haven't yet been able to successfully complete an auto-gpt request because of this error. It's been an underwhelming experience.
@Jefferydo are you using GPT-3 mode? https://github.com/Significant-Gravitas/Auto-GPT#gpt35-only-mode
Plugins and other issues cover this topic well. Closing
Duplicates
Summary 💡
Sometimes GPT agents will be prompted with a request to browse the internet or otherwise go beyond their capabilities (e.g. running a command/executable, generating an image, etc.) to fulfill a subtask. The example below will reproduce this behavior.
This results in recursive loops or buggy, unintended behavior downstream that ends up rendering the program less useful.
My feature request would be something to effectively control for these limitations. Also worth keeping in mind that with chat plugins (see docs here: https://platform.openai.com/docs/plugins) there could be some nice use cases where a GPT agent could search for plugins it has access to, and use that as a means to complete tasks.
You would effectively have to assume that GPT agents cannot access the internet, generate executables, etc.
Some of the main limitations of GPT agents (inherited from GPT-3) include:
Lack of contextual understanding: GPT-3 may not fully understand the context or nuance of certain questions or statements, which can lead to incorrect or nonsensical responses.
Absence of common sense reasoning: GPT-3 may struggle with tasks that require common sense or intuitive understanding, as its knowledge is derived from the text it has been trained on, rather than any innate reasoning capability.
Inability to remember: GPT-3 has no built-in memory of previous interactions, which can lead to inconsistencies in its responses, or difficulty in maintaining a coherent conversation over an extended period.
Lack of real-time learning: GPT-3 cannot learn new information or update its knowledge base in real-time, as it is based on a static dataset.
Inability to fact-check: GPT-3 cannot verify the accuracy of information, and may provide outdated or incorrect information based on the data it was trained on.
Thoughts?
Examples 🌈
purpose: you are an autonomous AI responsible for delegating three GPT agents to write programs and browse the internet for random requests.
Goal 1: generate three random code generation, image generation, or internet research requests
Goal 2: delegate these tasks to three separate GPT agents
Goal 3: record their initial response & write to file
Goal 4: take no action after their initial response
Goal 5: shut down when complete
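For reference, the purpose and goals above map onto Auto-GPT's settings file (commonly `ai_settings.yaml`; field names may differ between Auto-GPT versions, so treat this as a sketch rather than the exact schema):

```yaml
ai_name: DelegatorGPT
ai_role: an autonomous AI responsible for delegating three GPT agents to write
  programs and browse the internet for random requests
ai_goals:
  - generate three random code generation, image generation, or internet research requests
  - delegate these tasks to three separate GPT agents
  - record their initial response & write to file
  - take no action after their initial response
  - shut down when complete
```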
Motivation 🔦
No response