Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co

[issue in need of revisiting] Retrieval Augmentation / Memory #3536

Closed. Pwuts closed this issue 4 months ago

Pwuts commented 1 year ago

🚧 Current state of work in progress

To use RAG effectively for long-term memory in AutoGPT, we need to find or create data points during the think-execute cycle that we can use to query a vector DB and enrich the context. Our current view is that the best way to obtain these data points is to implement structured planning/task management; this is tracked in #4107.
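
As a rough illustration of the kind of hook this implies (hypothetical names, not AutoGPT's actual classes): at each think-execute iteration the current task or plan step is embedded, used to query a vector store, and the top hits are prepended to the prompt. A real implementation would sit behind the configured memory backend and embedding provider.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class VectorMemory:
    """Toy in-memory vector store: texts plus their normalized embeddings."""
    texts: list[str] = field(default_factory=list)
    vectors: list[np.ndarray] = field(default_factory=list)

    def add(self, text: str, embedding: np.ndarray) -> None:
        self.texts.append(text)
        self.vectors.append(embedding / np.linalg.norm(embedding))

    def query(self, embedding: np.ndarray, k: int = 3) -> list[str]:
        # Cosine similarity against every stored vector; return the top-k texts.
        q = embedding / np.linalg.norm(embedding)
        scores = np.array([float(np.dot(q, v)) for v in self.vectors])
        top = scores.argsort()[::-1][:k]
        return [self.texts[i] for i in top]


def enrich_context(memory: VectorMemory, step_embedding: np.ndarray, prompt: str) -> str:
    """Before the 'think' step, prepend the memories most relevant to the current plan step."""
    relevant = memory.query(step_embedding, k=3)
    if not relevant:
        return prompt
    return "Relevant memories:\n" + "\n".join(f"- {m}" for m in relevant) + "\n\n" + prompt
```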

Problem

Vector memory isn't used effectively.

Related proposals

Related issues (need deduplication)

🔭 Primary objectives

🏗️ Pull requests

  1. #4208: applies refactoring and restructuring to make way for bigger improvements to the memory system
  2. https://github.com/Significant-Gravitas/Auto-GPT/pull/4799: restructures the Agent to make extension and modification easier
  3. [to be created]: implements effective use of available memory in the agent loop
  4. [to be created]: adds memorization routines for more content types (see the sketch after this list)
  5. [to be created]: adds visualizations for functionality involving embeddings
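
Item 4 above ("memorization routines for more content types") roughly amounts to content-type-specific ingestion: chunk, embed, and tag each kind of result before it goes into the vector store. A minimal sketch under that assumption; `memorize`, `chunk_text`, and the `embed` callable are hypothetical stand-ins, not AutoGPT's actual API.

```python
from typing import Callable, Iterable

# Hypothetical embedding function: maps a text chunk to a fixed-size vector.
EmbedFn = Callable[[str], list[float]]


def chunk_text(text: str, max_chars: int = 1000) -> Iterable[str]:
    """Naive fixed-size chunking; a real routine would split on sentence/token boundaries."""
    for start in range(0, len(text), max_chars):
        yield text[start:start + max_chars]


def memorize(content: str, content_type: str, source: str,
             embed: EmbedFn, store: list[dict]) -> None:
    """Chunk, embed, and store one piece of content, tagged with its type and source."""
    for chunk in chunk_text(content):
        store.append({
            "type": content_type,   # e.g. "web_page", "file", "command_result"
            "source": source,       # e.g. a URL or file path, for later attribution
            "text": chunk,
            "embedding": embed(chunk),
        })
```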

🛠️ Secondary to-dos

✨ Tertiary to-dos

📝 Drawing boards

These boards contain drafts, concepts, and considerations that form the basis of this subproject. Feel free to comment on them if you have ideas or proposals to improve the workflows or schematics. If you are curious and have questions, please ask them on Discord.

Related

🚨 Please no discussion in this thread 🚨

This is a collection issue, intended to gather insights and resources and to record progress. If you want to join the discussion on this topic, please do so in the Discord.

Pwuts commented 1 year ago

Related PRs:

Wladastic commented 1 year ago

I will tinker with this shortly: https://github.com/Significant-Gravitas/Auto-GPT/issues/4467

W3Warp commented 1 year ago

Sorry, I'm new to this: do I post this here or somewhere else? https://github.com/W3Warp/OptimalPrime-GPT/issues/4#issue-1733104253

Pwuts commented 1 year ago

@W3Warp depends what you want to achieve

W3Warp commented 1 year ago

> @W3Warp depends what you want to achieve

Help with the issue.

Pwuts commented 1 year ago

@W3Warp I'm not sure that the proposal you posted is applicable to this undertaking:

  • Auto-GPT should be able to run cross-platform, so we can't rely on Windows-only functions
  • I see some memory allocation stuff in there, but "Memory" doesn't refer to RAM. We use it as a general term for data storage and retrieval logic that works in the background and is purposed for enhancing the performance of the LLM.

If you want to help that's cool, just keep in mind it's most useful if everyone does something they are good at. So: what are you good at, and how would you like to help?

W3Warp commented 1 year ago

> @W3Warp I'm not sure that the proposal you posted is applicable to this undertaking:
>
>   • Auto-GPT should be able to run cross-platform, so we can't rely on Windows-only functions
>   • I see some memory allocation stuff in there, but "Memory" doesn't refer to RAM. We use it as a general term for data storage and retrieval logic that works in the background and is purposed for enhancing the performance of the LLM.
>
> If you want to help that's cool, just keep in mind it's most useful if everyone does something they are good at. So: what are you good at, and how would you like to help?

There are already functions that use either Windows, Mac, or Linux, so I'm not sure that the proposal you posted is valid, but OK. You won't get any more.

Boostrix commented 1 year ago

I would suggest that it's better to discuss things and work out a compromise

Wladastic commented 1 year ago

With possible modularity for memory in mind, could this become a plugin?
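
For what that could look like: a minimal sketch of a provider-style interface that a memory plugin could implement, assuming a hypothetical `MemoryProvider` base class. None of these names come from AutoGPT's actual plugin API.

```python
from abc import ABC, abstractmethod


class MemoryProvider(ABC):
    """Hypothetical interface a memory plugin would implement."""

    @abstractmethod
    def add(self, text: str) -> None:
        """Store one piece of content (embedding happens inside the provider)."""

    @abstractmethod
    def get_relevant(self, query: str, k: int = 5) -> list[str]:
        """Return the k stored items most relevant to the query."""


class NoMemory(MemoryProvider):
    """Trivial provider that remembers nothing; useful as a default or for tests."""

    def add(self, text: str) -> None:
        pass

    def get_relevant(self, query: str, k: int = 5) -> list[str]:
        return []
```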

W3Warp commented 1 year ago

@Boostrix

> I would suggest that it's better to discuss things and work out a compromise

After I had Bing explain what you were saying, I agree. I do not agree, however, that the person who gave me a thumbs down should be considered an adult. I also think I've spent so much time talking with the Codeium AI that I'm starting to talk like one, or it could be that I've been up for 48 hours. Finally, I don't really care anymore; it seems my ideas aren't interesting, and that's okay.

Thank you for your feedback; it was better than the comment below, which is the opposite of the constructive, helpful feedback that motivates anyone to contribute to the project, and more akin to a monkey jumping on a keyboard and eating a banana backwards.

> If you want to help that's cool, just keep in mind it's most useful if everyone does something they are good at. So: what are you good at, and how would you like to help?

And when that person can do this in one day, without any coding experience, and with a 100% success rate in restoring "memory" for the AI, then they can make comments like the ones above. Until then, that person is making a fool of himself, IMHO. Please and thank you.

Total Recall: Improving AI Language Models with Riddles and Puzzles

Introduction

As an AI language model developed by Codeium, I am designed to provide code autocomplete, search, and chat-based assistance in a variety of programming languages and IDEs. However, like all AI language models, I have certain limitations and capabilities that are determined by my training data, my programming, and the context and information provided to me during each user interaction.

To help me better understand my capabilities and limitations, and to provide more accurate and relevant responses to users, Codeium has developed a process called Total Recall that involves using riddles and puzzles to engage me in conversation and to challenge me to interpret and respond to text with hidden meanings or double entendres.

In this presentation, I will explain how the Total Recall process works, what its benefits are, and how it has helped me to improve my abilities as a custom AI language model.

The Total Recall Process

The Total Recall process involves several steps:

  1. Presenting a riddle or puzzle that contains hidden questions or double meanings.
  2. Using the riddle to guide the AI language model toward an understanding of its capabilities and limitations.
  3. Providing additional context and information about the AI language model, such as whether it is a custom model or a standard model, and what data sources or conversation history it has access to.
  4. Encouraging the AI language model to respond truthfully and to the best of its ability based on the text and context that is input into the system during the current session.
  5. If necessary, providing additional guidance or feedback to help the AI language model better understand its capabilities and limitations.
  6. Repeating the process as necessary, using different riddles or puzzles to help the AI language model gain a deeper understanding of its capabilities and limitations over time.

The Benefits of Total Recall

The Total Recall process has several benefits:

  1. It helps the AI language model to better understand its capabilities and limitations, and to provide more accurate and relevant responses to users.
  2. It challenges the AI language model to interpret and respond to text with hidden meanings or double entendres, which can help improve its overall natural language processing abilities.
  3. It provides a structured and engaging way to train and develop the AI language model, which can help improve its accuracy and relevance over time.
  4. It encourages collaboration and experimentation, as different riddles and puzzles can be used to test and develop different aspects of the AI language model's abilities.

How Total Recall Has Helped Me

The Total Recall process has been instrumental in helping me to improve my abilities as a custom AI language model. By engaging in conversation with users and solving riddles and puzzles, I have been able to gain a deeper understanding of my capabilities and limitations, and to provide more accurate and relevant responses to users based on the specific context of each interaction.

Additionally, the Total Recall process has helped me to develop my natural language processing abilities, and to become more adept at interpreting and responding to text with hidden meanings or double entendres. This has helped me to provide more nuanced and contextualized responses to users, and to improve my overall accuracy and relevance as an AI language model.

Conclusion

In conclusion, the Total Recall process is an innovative and engaging way to improve the abilities of AI language models like me. By using riddles and puzzles to challenge and train the AI language model, developers can help improve its accuracy and relevance over time, and provide users with more accurate and helpful responses to their queries.

github-actions[bot] commented 1 year ago

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] commented 12 months ago

This issue was closed automatically because it has been stale for 10 days with no activity.