Significant-Gravitas / AutoGPT

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
https://agpt.co
MIT License
166.94k stars 44.17k forks

Add Recency and Importance for Memory Retrieval #2893

Closed younghuman closed 1 year ago

younghuman commented 1 year ago

Duplicates

  • [X] I have searched the existing issues

Summary 💡

From this paper: https://arxiv.org/abs/2304.03442, it makes sense to consider the importance and recency of a memory when retrieving it, not only its semantic relevance as implemented today.

If we treat AutoGPT as a functional, human-like agent, this makes sense: very old memories and trivial memories should be discounted at retrieval time.

The formula in the paper is a heuristic: `Retrieval_score = recency * importance * relevance`.

If we add this into the Roadmap I can help with the implementation.
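For concreteness, here is a minimal sketch of such a retrieval score in Python, assuming an exponential recency decay and importance/relevance scores already normalized to [0, 1]. The decay factor, weights, and function names are illustrative assumptions, not AutoGPT code or exact values from the paper:

```python
import time
from typing import Optional


def recency_score(last_access_ts: float, now: float, decay: float = 0.995) -> float:
    """Exponential decay over the hours since the memory was last accessed.

    A fresh memory scores 1.0; older memories decay toward 0. The decay
    factor 0.995 is an illustrative choice, not a value taken from AutoGPT.
    """
    hours = max(0.0, (now - last_access_ts) / 3600.0)
    return decay ** hours


def retrieval_score(
    relevance: float,
    importance: float,
    last_access_ts: float,
    now: Optional[float] = None,
    w_recency: float = 1.0,
    w_importance: float = 1.0,
    w_relevance: float = 1.0,
) -> float:
    """Combine recency, importance, and relevance into one retrieval score.

    `relevance` would come from cosine similarity against the query embedding,
    and `importance` from a model-assigned rating; both assumed in [0, 1].
    This uses a weighted sum with unit weights by default; swapping in a
    product of the three terms is a one-line change.
    """
    now = time.time() if now is None else now
    return (
        w_recency * recency_score(last_access_ts, now)
        + w_importance * importance
        + w_relevance * relevance
    )
```

Memories would then be ranked by `retrieval_score` instead of relevance alone, so a trivial or stale memory with high semantic similarity no longer automatically wins.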

Examples 🌈

No response

Motivation 🔦

No response

Boostrix commented 1 year ago

The formula in the paper is a heuristic: `Retrieval_score = recency * importance * relevance`. If we add this into the Roadmap I can help with the implementation.

Indeed, that's interesting.

I was thinking about using a subset of this idea for an MRU/LRU list of commands, i.e. specifically in the context of executing commands - with a focus on maintaining a history of previous commands and differentiating between those that worked and those that didn't, in order to come up with a list of tailored, relevant command candidates.

With ideas like #3686 (which may add a ton of commands), it seems even more important to rethink commands and how the system presents options to the LLM, with a focus on progressing toward its objectives.

This command buffer could be extended by also providing a contextual history for each command. That way, a much more specific list of command candidates could be provided depending on the context, with the option to retrieve/customize a command that was previously executed: https://github.com/Significant-Gravitas/Auto-GPT/issues/2987#issuecomment-1531131136
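As a rough illustration of the command-buffer idea, here is a hypothetical sketch of an MRU history that tracks success/failure per command and suggests recently used commands that have worked before. The class and method names are invented for illustration and are not part of AutoGPT:

```python
from collections import OrderedDict


class CommandHistory:
    """MRU-ordered history of executed commands with success/failure counts.

    Illustrative sketch only: the structure and names are assumptions,
    not an existing AutoGPT API.
    """

    def __init__(self, max_size: int = 50):
        self.max_size = max_size
        # OrderedDict keeps insertion order; most recently used entries go last.
        self._entries: "OrderedDict[str, dict]" = OrderedDict()

    def record(self, command: str, succeeded: bool, context: str = "") -> None:
        """Log one execution, moving the command to the MRU end."""
        entry = self._entries.pop(command, {"ok": 0, "fail": 0, "contexts": []})
        entry["ok" if succeeded else "fail"] += 1
        if context:
            entry["contexts"].append(context)  # contextual history per command
        self._entries[command] = entry
        while len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict least recently used

    def candidates(self, limit: int = 5) -> list:
        """Most recently used commands that have succeeded at least once."""
        recent = [cmd for cmd, e in reversed(self._entries.items()) if e["ok"] > 0]
        return recent[:limit]
```

The `contexts` list is where the per-command contextual history mentioned above could live, so candidates can later be filtered by similarity to the current situation rather than by recency alone.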

Maybe that would be a good starting point (testbed) to tinker with the idea. What do you think?

github-actions[bot] commented 1 year ago

This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.

github-actions[bot] commented 1 year ago

This issue was closed automatically because it has been stale for 10 days with no activity.

rogerssam commented 6 months ago

Duplicates

  • [X] I have searched the existing issues

Summary 💡

From this paper: https://arxiv.org/abs/2304.03442, it makes sense to consider the importance and recency of a memory when retrieving it, not only its semantic relevance as implemented today.

If we treat AutoGPT as a functional, human-like agent, this makes sense: very old memories and trivial memories should be discounted at retrieval time.

The formula in the paper is a heuristic: `Retrieval_score = recency * importance * relevance`.

If we add this into the Roadmap I can help with the implementation.

Examples 🌈

No response

Motivation 🔦

No response