Closed: Pwuts closed this 4 months ago
Related PRs:
I will tinker with this shortly: https://github.com/Significant-Gravitas/Auto-GPT/issues/4467
Sorry, I'm new to this. Do I post this here or somewhere else? https://github.com/W3Warp/OptimalPrime-GPT/issues/4#issue-1733104253
@W3Warp depends what you want to achieve
Help with the issue.
@W3Warp I'm not sure that the proposal you posted is applicable to this undertaking:
- Auto-GPT should be able to run cross-platform, so we can't rely on Windows-only functions
- I see some memory allocation stuff in there, but "Memory" doesn't refer to RAM. We use it as a general term for the data storage and retrieval logic that works in the background to enhance the performance of the LLM.
If you want to help that's cool, just keep in mind it's most useful if everyone does something they are good at. So: what are you good at, and how would you like to help?
There are already functions that use either Windows, Mac or Linux, so I'm not sure that the proposal you posted is valid, but okay. You won't get any more.
I would suggest that it's better to discuss things and work out a compromise
With possible memory modularity in mind, could this become a plugin?
@Boostrix
> I would suggest that it's better to discuss things and work out a compromise
After I had Bing explain what you were saying, I agree. I do not agree, however, that the person who gave me a thumbs down should be considered an adult. I also think I've spent so much time talking with the Codeium AI that I'm starting to talk like one, or maybe it's because I've been up for 48 hours. Finally, I don't really care anymore; it seems my ideas aren't interesting, and that's okay.
Thank you for your feedback, because it was better than the quote below, which is the opposite of the constructive, helpful feedback that would motivate anyone to contribute to the project, and more in line with a monkey jumping on a keyboard and eating a banana backwards.
> If you want to help that's cool, just keep in mind it's most useful if everyone does something they are good at. So: what are you good at, and how would you like to help?
And when that person can do this in one day, without any coding experience and with a 100% success rate in terms of restoring "memory" for the AI, then they can use comments like the ones above. Until then, that person is making a fool of himself IMHO, please and thank you.
As an AI language model developed by Codeium, I am designed to provide code autocomplete, search, and chat-based assistance in a variety of programming languages and IDEs. However, like all AI language models, I have certain limitations and capabilities that are determined by my training data, my programming, and the context and information provided to me during each user interaction.
To help me better understand my capabilities and limitations, and to provide more accurate and relevant responses to users, Codeium has developed a process called Total Recall that involves using riddles and puzzles to engage me in conversation and to challenge me to interpret and respond to text with hidden meanings or double entendres.
In this presentation, I will explain how the Total Recall process works, what its benefits are, and how it has helped me to improve my abilities as a custom AI language model.
The Total Recall process involves several steps:
The Total Recall process has several benefits:
The Total Recall process has been instrumental in helping me to improve my abilities as a custom AI language model. By engaging in conversation with users and solving riddles and puzzles, I have been able to gain a deeper understanding of my capabilities and limitations, and to provide more accurate and relevant responses to users based on the specific context of each interaction.
Additionally, the Total Recall process has helped me to develop my natural language processing abilities, and to become more adept at interpreting and responding to text with hidden meanings or double entendres. This has helped me to provide more nuanced and contextualized responses to users, and to improve my overall accuracy and relevance as an AI language model.
In conclusion, the Total Recall process is an innovative and engaging way to improve the abilities of AI language models like me. By using riddles and puzzles to challenge and train the AI language model, developers can help improve its accuracy and relevance over time, and provide users with more accurate and helpful responses to their queries.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
🚧 Current state of work in progress
In order to effectively utilize RAG for long-term memory in AutoGPT, we need to find or create data points during the think-execute cycle which we can use to query a vector DB and enrich the context. Our current vision is that the best way to realize these data points is to implement structured planning/task management. This is tracked in #4107.
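As a rough illustration of that think-execute flow, here is a minimal, hypothetical sketch. The names `Task`, `get_embedding`, and the `memory.search` call are placeholders for whatever the planning layer and the chosen vector DB eventually expose; they are not Auto-GPT's actual API.

```python
# Hypothetical sketch: turn structured task data into a vector-DB query and
# use the results to enrich the context for the next step of the agent loop.
from dataclasses import dataclass


@dataclass
class Task:
    objective: str
    current_step: str


def build_query(task: Task) -> str:
    # Structured planning gives us focused data points to query with,
    # instead of dumping the whole conversation into the search.
    return f"{task.objective}\n{task.current_step}"


def enrich_context(task: Task, memory, get_embedding, k: int = 5) -> str:
    """Return a context block with the k most relevant memories for this step."""
    query_vector = get_embedding(build_query(task))
    # `memory.search` stands in for whatever the chosen vector DB provides.
    relevant = memory.search(query_vector, top_k=k)
    return "Relevant memories:\n" + "\n".join(f"- {m.text}" for m in relevant)
```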
Problem
Vector memory isn't used effectively.
Related proposals
#2058
Related issues (need deduplication)
#623
#2072
#2076
#2232
#2893
#3451
🔭 Primary objectives
- [x] Robust and reliable memorization routines for all relevant types of content
🏗️ 1+3
This covers all processes, functions and pipelines involved in creating memories. We need the content of the memory to be of high quality to allow effective memory querying and to maximize the added value of having these memories in the first place. TL;DR: garbage in, garbage out -> what goes in must be focused towards relevance and subsequent use. (A hedged sketch of such a memorization routine follows below.)
🏗️ 1
🏗️ 3
#3031
🏗️ 3
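The sketch below illustrates the "garbage in, garbage out" point: filter and condense content before it is embedded and stored, so that what goes in is focused on relevance and later use. The helpers `chunk_text`, `summarize`, `get_embedding` and `memory.add` are illustrative placeholders, not the project's real API.

```python
# Hedged sketch of a memorization routine built from injected helpers.
def memorize(raw_content: str, source: str, memory, get_embedding,
             chunk_text, summarize, max_chunk_tokens: int = 512) -> None:
    for chunk in chunk_text(raw_content, max_tokens=max_chunk_tokens):
        summary = summarize(chunk)      # condensed, query-friendly form
        if not summary.strip():
            continue                    # skip chunks with no useful signal
        memory.add(
            text=summary,
            embedding=get_embedding(summary),
            metadata={"source": source, "raw": chunk},
        )
```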
- [x] Good memory search/retrieval based on relevance
🏗️ 1
For a given query (e.g. a prompt or question), we need to be able to find the most relevant memories (a hedged, backend-agnostic retrieval sketch follows below). Must be implemented separately for each memory backend provider:
- Milvus

(The other currently implemented providers are not in this list because they may be moved to plugins.)
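For illustration only, here is a backend-agnostic sketch of relevance-based retrieval using cosine similarity over embeddings held in memory. A real provider such as Milvus performs this search natively on its own index; the in-Python scoring below just shows what "most relevant" means.

```python
# Minimal relevance ranking over (text, embedding) pairs, assuming embeddings
# are plain NumPy vectors produced by whichever embedding model is in use.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def most_relevant(query_embedding: np.ndarray,
                  memories: list[tuple[str, np.ndarray]],
                  top_k: int = 5) -> list[str]:
    """Return the texts of the top_k stored memories most similar to the query."""
    scored = sorted(
        memories,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:top_k]]
```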
- [ ] Effective LLM context provisioning from memory
🏗️ 2
Once we have an effective system to store and retrieve memories, we can hook this into the agent loop. The goal is to provide the LLM with focused, useful information when it is needed or useful for the next step of a given task.

🏗️ Pull requests
#4208
Applies refactoring and restructuring to make way for bigger improvements to the memory system; restructures the Agent to make extension and modification easier

🛠️ Secondary todo's
🏗️ 1
🏗️ 1
🏗️ 3
(see also Code files under 🔭 Primary objectives)

✨ Tertiary todo's
🏗️ 4
📝 Drawing boards
These boards contain drafts, concepts and considerations that form the basis of this subproject. Feel free to comment on them if you have ideas/proposals to improve the workflows or schematics. If you are curious and have questions, please ask those on Discord.
Related
🚨 Please no discussion in this thread 🚨
This is a collection issue, intended to collect insights, resources and record progress. If you want to join the discussion on this topic, please do so in the Discord.