Closed yackimoff closed 1 year ago
From my experience, there is no contextual memory when using the OpenAI API ( only when you're using the web interface), so I'm assuming this is the workaround the devs decided to use for this issue, but I'm not a dev on this project so I don't know for sure
Yeah, it's a workaround for the context-length limit. It's actually a great idea; it's just not implemented thoughtfully.
I added some discussion of this issue at https://github.com/Significant-Gravitas/Auto-GPT/discussions/2847 before I saw this issue.
This issue has automatically been marked as stale because it has not had any activity in the last 50 days. You can unstale it by commenting or removing the label. Otherwise, this issue will be closed in 10 days.
This issue was closed automatically because it has been stale for 10 days with no activity.
⚠️ Search for existing issues first ⚠️
GPT-3 or GPT-4
Steps to reproduce
No response
Current behavior
Look at the code here: https://github.com/Significant-Gravitas/Auto-GPT/blob/75baa11e8196d9cdb26d26bf971ce3f98ebdaee5/autogpt/chat.py#L85
To generate context for the request, Auto-GPT currently fetches the 10 "memories" most relevant to the previous 10 messages, without any filtering. This doesn't make sense: those previous 10 messages are already stored in the memory database, so they are essentially the most relevant results for themselves, and the retrieved "context" largely echoes the conversation back instead of surfacing older, genuinely useful information.
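To illustrate the point, here is a minimal, self-contained sketch of the retrieval step. The `get_relevant` function and the toy word-overlap scoring are hypothetical stand-ins, not Auto-GPT's actual implementation; the point is the filtering step in `build_context`, which excludes entries that are themselves among the recent messages so the query doesn't just retrieve itself:

```python
def get_relevant(memory, query_words, k):
    """Rank stored entries by naive word overlap with the query, return top-k.
    (Toy scoring; Auto-GPT uses embedding similarity, but the self-retrieval
    problem is the same.)"""
    scored = sorted(
        memory,
        key=lambda entry: len(set(entry.split()) & set(query_words)),
        reverse=True,
    )
    return scored[:k]


def build_context(memory, recent_messages, k=10):
    """Fetch memories relevant to the recent messages, but drop entries that
    are already in the recent messages, so the context isn't just the
    conversation echoed back to itself."""
    query_words = " ".join(recent_messages).split()
    # Over-fetch so we can still return k results after filtering.
    relevant = get_relevant(memory, query_words, k + len(recent_messages))
    filtered = [m for m in relevant if m not in recent_messages]
    return filtered[:k]


memory = [
    "the api has no contextual memory",
    "user prefers concise answers",
    "project uses a redis backend",
]
recent = ["the api has no contextual memory"]
print(build_context(memory, recent, k=2))
```

Without the filter, the recent message would score highest against itself and crowd out the two older memories; with it, the older entries are returned instead.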
Expected behavior ๐ค
Your prompt
Irrelevant.