Hi Daniel,
Thank you so much for your kind words!
Integrating it directly into the experimental Memory feature of Open-WebUI would be a significant enhancement! I've been considering opening a feature request on the GitHub page for this very idea. Just imagine how powerful it would be for the LLM to automatically capture and memorize information as you interact with it in real-time!
It's also worth noting that the tool is still in its experimental release phase, and there's plenty of room for improvement. For example, the LLM could handle multiple memories per request, which would greatly enhance the user experience! A rough sketch of that is below.
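Here is a minimal sketch of what handling multiple memories per request could look like; add_memories and the injected store_one callable are hypothetical, since the current tool stores one entry per call:

from typing import Callable, List

def add_memories(input_texts: List[str], store_one: Callable[[str], None]) -> List[str]:
    """Store several memories from a single request.

    store_one is whatever single-entry insert the tool already has
    (hypothetical parameter; the current tool stores one entry per call).
    """
    stored: List[str] = []
    for text in input_texts:
        text = text.strip()
        if not text or text in stored:
            continue  # skip empty strings and duplicates within the batch
        store_one(text)
        stored.append(text)
    return stored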
I plan to open that feature request soon—possibly later this week. Thanks for your great idea!
Your feedback is incredibly valuable, appreciate your engagement with this project. It has so much potential for growth and innovation!
Hi again Daniel! I have been working on this tool on Open WebUI, which is an internal tool (i.e., it doesn't interact with the LLM directly).
However, it could be integrated with our tool: the JSON would be formatted to match the DB schema in the memories.py code and then saved in the same directory as the DB files! I will check later whether that is possible, because if so, the whole set of memory functions, like "add to memory", would change. A rough sketch of the idea is below.
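Here is a rough sketch of that idea, mirroring the fields of the Memory model in memories.py (id, user_id, content, updated_at, created_at); the file path and helper name are hypothetical:

import json
import time
import uuid
from pathlib import Path

# Hypothetical path next to Open WebUI's DB files; adjust to your install.
MEMORY_JSON = Path("/app/backend/data/memories.json")

def export_memory_as_json(user_id: str, content: str) -> dict:
    """Build and persist a JSON record mirroring the Memory schema
    in memories.py (id, user_id, content, updated_at, created_at)."""
    now = int(time.time())
    record = {
        "id": str(uuid.uuid4()),
        "user_id": user_id,
        "content": content,
        "updated_at": now,
        "created_at": now,
    }
    existing = []
    if MEMORY_JSON.exists():
        existing = json.loads(MEMORY_JSON.read_text())
    existing.append(record)
    MEMORY_JSON.write_text(json.dumps(existing, ensure_ascii=False, indent=2))
    return record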
I'm currently implementing new features, like new JSON files, in the GPT4 Memory Mimic tool! Feel free to try it and report issues; I would appreciate that!
Thank you so much!
Just for your interest,
I have been working on this tool on Open WebUI, which is an internal tool (i.e., it doesn't interact with the LLM directly).
I believe that's not quite accurate. I’ve tested this across multiple chat windows with different local models (I'm not using the OpenAI API at the moment), and the memory is available in all of them. However, I’m not entirely sure how it’s being added to the model—perhaps through context, RAG, or some other method.
It seems fairly straightforward to utilize these classes—Memories, MemoriesTable—for this purpose.
If you don’t mind, I’d like to share my approach to utilizing this built-in functionality, starting with reading and adding memories.
class Tools:
    ...

    async def recall_memories(
        self, __user__: dict, __event_emitter__: Callable[[dict], Any] = None
    ) -> str:
        """
        Retrieve all stored memories from the user's memory vault and provide them to the user. Be accurate and precise. Do not add any additional information. Always use the function to access memory or memories. If the user asks about what is currently stored, only return the exact details from the function. Do not invent or omit any information.

        :return: A numbered list of all memories. You MUST present the memories to the user as text. It is important that all memories are displayed without omissions. Please show each memory entry in full!
        """
        # get the user id
        self.user_id = __user__.get("id")

        if __event_emitter__:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {
                        "description": "Retrieving stored memories.",
                        "done": False,
                    },
                }
            )

        # refresh the cached user memories from the database
        self.user_memory = MemoryManager.refresh_user_memory(
            self, self.user_memory, self.user_id
        )

        # get the amount of memories
        self.user_memory_count = len(self.user_memory)

        if not self.user_memory:
            message = "No memory stored."
            if __event_emitter__:
                await __event_emitter__(
                    {
                        "type": "status",
                        "data": {"description": message, "done": True},
                    }
                )
            return json.dumps({"message": message}, ensure_ascii=False)

        if __event_emitter__:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {
                        "description": f"{self.user_memory_count} memories loaded",
                        "done": True,
                    },
                }
            )

        # format the memories as a numbered list, oldest first
        content_list = [
            f"{index}. {memory.content}"
            for index, memory in enumerate(
                sorted(self.user_memory, key=lambda memory: memory.created_at), start=1
            )
        ]

        return f"Memories from the user's memory vault: {content_list}"

    async def add_memory(
        self,
        input_text: str,
        __user__: dict,
        __event_emitter__: Callable[[dict], Any] = None,
    ) -> str:
        """
        Add a new entry to the user's memory vault. Always use the function to actually store the data; do not simulate or pretend to save data without using the function. After adding the entry, retrieve all stored memories from the user's memory vault and provide them accurately. Do not invent or omit any information; only return the data obtained from the function. Do not assume that any input text already exists in the user's memories unless the function explicitly confirms that a duplicate entry is being added. Simply acknowledge the new entry without referencing prior content unless it is confirmed by the memory function.

        :param input_text: The text to store.
        :return: A numbered list of all memories. You MUST present the memories to the user as text. It is important that all memories are displayed without omissions. Please show each memory entry in full!
        """
        if __event_emitter__:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {
                        "description": "Adding entry to the memory vault.",
                        "done": False,
                    },
                }
            )

        # get the user id
        self.user_id = __user__.get("id")

        # insert the new memory into the database; Memories is the
        # MemoriesTable instance exported by memories.py, so it can be
        # called directly, and content is stored as plain text
        status = Memories.insert_new_memory(self.user_id, input_text)

        # refresh the cached user memories from the database
        self.user_memory = MemoryManager.refresh_user_memory(
            self, self.user_memory, self.user_id
        )

        # format the memories as a numbered list, oldest first
        content_list = [
            f"{index}. {memory.content}"
            for index, memory in enumerate(
                sorted(self.user_memory, key=lambda memory: memory.created_at), start=1
            )
        ]

        if __event_emitter__:
            await __event_emitter__(
                {
                    "type": "status",
                    "data": {
                        "description": "Added entry to the memory vault.",
                        "done": True,
                    },
                }
            )

        return f"Added to the user's memory vault. The new memories are: {content_list}"

    ...
I understand that this method only retrieves individual entries from memory and does not allow for tagging and other features like in your tool; however, one could consider extending the Memory class with custom values that may not be visible in the UI settings but could assist the LLM in loading and editing specific memories by tag.
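A minimal sketch of that tag idea, assuming tags are simply embedded in the stored content as a [tag] prefix (the Memory table has no tag column, so this is purely a convention, and both helpers are hypothetical):

import re
from typing import List

TAG_PATTERN = re.compile(r"^\[(?P<tag>[^\]]+)\]\s*(?P<text>.*)$", re.DOTALL)

def tag_content(tag: str, text: str) -> str:
    """Embed a tag in the content itself, e.g. '[work] Daniel uses Docker'."""
    return f"[{tag}] {text}"

def filter_memories_by_tag(memories: List[str], tag: str) -> List[str]:
    """Return only the memory contents carrying the given [tag] prefix."""
    result = []
    for content in memories:
        match = TAG_PATTERN.match(content)
        if match and match.group("tag") == tag:
            result.append(match.group("text"))
    return result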
Perhaps the built-in Files class, which you haven't utilized yet, could also be useful here. I wanted to see what is being stored in your memory.json, but I could only find it by accessing the Docker container, since I was also curious about where it's stored. It didn't appear in my volume.
It's unfortunate that Open-WebUI currently lacks extensive documentation about its backend, but you can navigate through it with the following link: https://github.com/open-webui/open-webui/blob/main/backend/open_webui/apps/webui/models/files.py
Thank you for your time and consideration! I hope these insights prove helpful as you continue to develop this tool. I'm looking forward to any updates or discussions that may arise from this. Please feel free to reach out if you have any questions or need further clarification!
Best regards, dnl13
Sorry, I forgot to mention my imports (including the standard-library ones the snippet needs):
import json
from typing import Any, Callable

from open_webui.apps.webui.models.memories import Memories, MemoriesTable
from open_webui.apps.webui.models.users import User
Just for your interest,
I have been working on this tool on Open WebUI, which is an internal tool (i.e., it doesn't interact with the LLM directly).
I believe that's not quite accurate. I’ve tested this across multiple chat windows with different local models (I'm not using the OpenAI API at the moment), and the memory is available in all of them. However, I’m not entirely sure how it’s being added to the model—perhaps through context, RAG, or some other method.
It seems fairly straightforward to utilize these classes—Memories, MemoriesTable—for this purpose.
If you don’t mind, I’d like to share my approach to utilizing this built-in functionality, starting with reading and adding memories. . . .
Now I get it! You are using the Memories class in our tool! Cool! For the rest of the week I might not be able to focus on this great update. Could you check whether this class can be implemented on top of the last update we released (the "download memories" update, I mean)? If so, could you open a PR adding it as a new tool in the tools/ folder? That would be great. If not, I will work on this in the next two weeks (God willing).
Thanks a lot, Daniel! I hope my understanding of what you said was correct.
Hey @mhioi,
Unfortunately, I’m a bit busy this week too, but I will make sure to send you a PR during my free time.
It might be a good idea to create another branch for us to work on, as we can refactor your code to the built-in classes without affecting the current setup. This way, we can also consider what additional valves and user valves we might need, as well as explore further strategies that could facilitate a smooth integration.
But yes, I will try to gradually adapt your code to the classes. 😉
Thank you so much!
I will do my best!
Hi @dnl13
I've been working on implementing the tool on top of MemoriesTable, and it rocks! Thanks for your great code here! I'm going to make a new branch (I don't know what to name it yet) containing the code you provided. I also made a tweak and added a memory-remove feature, along the lines of the sketch below; I don't know whether that's a good idea, and I think not, because the LLM would have full control of the memories, and we don't want that.
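Roughly, the tweak looks like the sketch below. This is only a sketch: it assumes memories.py exposes get_memory_by_id and delete_memory_by_id on the table (hypothetical if your version differs), and it reuses the imports from the snippet earlier in the thread:

    async def remove_memory(self, memory_id: str, __user__: dict) -> str:
        """
        Delete a single entry from the user's memory vault by its id.
        Sketch only; the method names on the Memories table are assumptions.
        """
        self.user_id = __user__.get("id")
        # look the entry up first so another user's memory is never deleted
        memory = Memories.get_memory_by_id(memory_id)
        if memory is None or memory.user_id != self.user_id:
            return json.dumps({"message": "Memory not found."}, ensure_ascii=False)
        deleted = Memories.delete_memory_by_id(memory_id)
        message = "Memory deleted." if deleted else "Deletion failed."
        return json.dumps({"message": message}, ensure_ascii=False)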
I would really appreciate any ideas for new features, because all the features we could implement so far are adding, retrieving, and (maybe not a good one) deleting memories.
Oh! I was also thinking of uploading a new tool to the Open WebUI community for this new MemoriesTable approach.
Thanks for your help!
Hi @mhioi
Thanks for the feedback! I'm glad to hear the tool is working well with MemoriesTable. Sorry I haven’t submitted a PR yet—I wanted to thoroughly test the functionalities and explore various models before moving forward.
After extensive testing, I’ve found that memory-related functions vary significantly with different models. Many LLMs struggle to identify the correct function—especially in non-English languages. For instance, in German, using "merken" (to make a note of something) doesn’t always trigger the right action; sometimes "erinnere dich" (remember) works better.
I suspect that the naming of functions, as well as precise prompting, plays a crucial role in how well these models respond.
I also experimented with a delete feature but ended up removing it. I agree that giving the model full control over memories might not be ideal. My thought was to use a tag or variable in storage to differentiate between "user memories" and "LLM memories." I’m still working on this concept, but I’m currently fine-tuning prompts to handle this functionality effectively.
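A write-side sketch of that separation, borrowing the [tag] prefix convention; the helper name and the source values are my own illustration, not part of the tool:

from open_webui.apps.webui.models.memories import Memories

def store_with_source(user_id: str, text: str, source: str) -> None:
    """Store a memory prefixed with its origin, e.g. '[user] ...' or '[llm] ...'.
    The prefix is only a convention; the Memory schema has no tag column."""
    if source not in ("user", "llm"):
        raise ValueError(f"unexpected source tag: {source}")
    Memories.insert_new_memory(user_id, f"[{source}] {text}")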
For my part, I keep this line from the docs in mind when considering a better approach: "What are Tools? - Generally speaking, your LLM of choice will need to support function calling for tools to be reliably utilized."
Experimenting with Model Behavior:
I’ve been experimenting with how to achieve a balanced approach for memory storage. Some models start spamming memories, which isn’t ideal. Here, dividing memories into user-generated and LLM-generated categories could be helpful.
Additionally, I noticed that models don’t always reliably use the memory function; sometimes, they pretend to store information but actually don’t. They might retain some memory from the chat history, but once the memory recall function is used, those chat-based memories are deprioritized and often forgotten.
I find this topic fascinating, and I really appreciate your approach! But I need more testing and experience to influence models more effectively. For some, I even had to provide a system prompt to make them aware of the memory function—otherwise, they refused to use tools (possibly due to an embedded system prompt).
So that’s been my project for the past few nights! 😄
Hi Daniel! Thanks for your great feedback! These points are great for making this tool the best it can be!
Thanks for the feedback! I'm glad to hear the tool is working well with MemoriesTable. Sorry I haven’t submitted a PR yet—I wanted to thoroughly test the functionalities and explore various models before moving forward.
Thanks to you for your bright idea about the memories table!
After extensive testing, I’ve found that memory-related functions vary significantly with different models. Many LLMs struggle to identify the correct function—especially in non-English languages. For instance, in German, using "merken" (to make a note of something) doesn’t always trigger the right action; sometimes "erinnere dich" (remember) works better.
I suspect that the naming of functions, as well as precise prompting, plays a crucial role in how well these models respond.
Good point. One way is to use a strong model, like GPT-4; the other approach is to use a prompt telling it to use the tool in any language (I haven't tested that yet, but it might get things done). We have to accept that the model is a crucial factor: Llama 3.2 is not the same as Llama 3.1 8B, so a comparison with GPT-4o would be ridiculous...
I also experimented with a delete feature but ended up removing it. I agree that giving the model full control over memories might not be ideal. My thought was to use a tag or variable in storage to differentiate between "user memories" and "LLM memories." I’m still working on this concept, but I’m currently fine-tuning prompts to handle this functionality effectively.
Yeah! I have an idea! Look: what if we mix the two approaches? We save the memory both to Open WebUI and to our own JSON files. That way, we allow the LLM to delete and update ONLY the JSON files, while the memories are always kept in the internal MemoriesTable. I would be glad to hear your thoughts and feedback! A sketch of the idea follows.
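Here is a minimal sketch of that dual-write idea. The JSON path, file layout, and helper names are all hypothetical:

import json
import time
from pathlib import Path

from open_webui.apps.webui.models.memories import Memories

# Hypothetical location for the LLM-editable copy.
JSON_STORE = Path("memories_llm.json")

def add_memory_everywhere(user_id: str, text: str) -> None:
    """Write to the permanent MemoriesTable AND the LLM-editable JSON file."""
    Memories.insert_new_memory(user_id, text)  # permanent, never LLM-deletable
    records = json.loads(JSON_STORE.read_text()) if JSON_STORE.exists() else []
    records.append({"user_id": user_id, "content": text, "created_at": int(time.time())})
    JSON_STORE.write_text(json.dumps(records, ensure_ascii=False, indent=2))

def llm_delete_memory(user_id: str, text: str) -> bool:
    """The ONLY delete path exposed to the LLM: it touches just the JSON copy."""
    if not JSON_STORE.exists():
        return False
    records = json.loads(JSON_STORE.read_text())
    kept = [r for r in records if not (r["user_id"] == user_id and r["content"] == text)]
    JSON_STORE.write_text(json.dumps(kept, ensure_ascii=False, indent=2))
    return len(kept) < len(records)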
For me, I added this to consider a better approach: From the docs: "What are Tools? - Generally speaking, your LLM of choice will need to support function calling for tools to be reliably utilized."
Experimenting with Model Behavior:
I’ve been experimenting with how to achieve a balanced approach for memory storage. Some models start spamming memories, which isn’t ideal. Here, dividing memories into user-generated and LLM-generated categories could be helpful.
I think that's fine, because a model like Llama 3.2 3B is like the baby of Llama 405B. The baby is innocent; it just doesn't know yet how to manage its own memories!
Additionally, I noticed that models don’t always reliably use the memory function; sometimes, they pretend to store information but actually don’t. They might retain some memory from the chat history, but once the memory recall function is used, those chat-based memories are deprioritized and often forgotten.
I have encountered that before as well. The workaround was to explicitly tell the local LLM to store the information ("remember that"), while others like GPT-4o simply didn't consider the information crucial enough to store in the memories! I think building an agent dedicated ONLY to deciding whether a memory is crucial or not is the way to handle this; a sketch of that idea follows.
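A rough sketch of such a gatekeeper agent; ask_model is an injected callable standing in for whatever model client you use, so all names here are illustrative:

from typing import Callable

GATE_PROMPT = (
    "You are a memory gatekeeper. Given a chat message, answer only "
    "'yes' if it contains a durable fact worth remembering about the "
    "user (preferences, biography, long-term plans), otherwise 'no'.\n"
    "Message: {message}"
)

def should_store(message: str, ask_model: Callable[[str], str]) -> bool:
    """Ask a dedicated model whether this message deserves a memory entry.
    ask_model(prompt) -> completion text; injected so any backend works."""
    answer = ask_model(GATE_PROMPT.format(message=message))
    return answer.strip().lower().startswith("yes")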
I find this topic fascinating, and I really appreciate your approach! But I need more testing and experience to influence models more effectively. For some, I even had to provide a system prompt to make them aware of the memory function—otherwise, they refused to use tools (possibly due to an embedded system prompt).
Thank you so much!
The ways to get better usage of the memories are:
1. The larger the model, the better the function calling!
2. A system prompt is a way to let the LLM know it has tools it can call!
3. Telling the LLM directly to store the memories is another way!
4. The tool's own prompts: the better those prompts are, the better the LLM knows what to do!
I tested these approaches and got better responses! An example for point 2 follows the list.
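For point 2, here is an illustrative system prompt along the lines of what I tested (the wording is an example, not the exact prompt used):

# Illustrative system prompt; adjust the tool names to match your setup.
MEMORY_SYSTEM_PROMPT = (
    "You have access to a memory tool with the functions recall_memories "
    "and add_memory. When the user shares a durable fact or explicitly "
    "asks you to remember something, call add_memory with that fact. "
    "When asked what you know about the user, call recall_memories and "
    "repeat every entry verbatim. Never pretend to store or recall a "
    "memory without actually calling the tool."
)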
Unfortunately, I'm too busy to record a tutorial on how to set up or use this tool, or even to update the README for this purpose. My plan is to integrate the tool you made, with extensive prompt details, into the tool that is now available at this link. The idea is to let the LLM save memories to the MemoriesTable while also saving them to the JSON memories internally; BUT the point is that the LLM has the ability to delete the JSON memories, but NOT the MemoriesTable entries.
I think people will still struggle to manage these memories with a tool like ours. Even improving the tool as much as possible won't make memory handling for an LLM (like GPT-4) truly good and usable UNTIL this tool is built into Open WebUI itself. Just as the UI titles each chat, the UI would send every message to a completely separate agent that handles the relations between existing memories and each new memory. That would make memory handling the best it can be!
So that’s been my project for the past few nights! 😄
Thank you a lot for your sense of responsibility! I really appreciate it!
Nice tool.
Maybe it is possible to integrate it directly into the experimental Memory feature of Open-WebUI?
https://github.com/open-webui/open-webui/blob/main/backend/open_webui/apps/webui/models/memories.py
From the Docs: 🧠Memory Feature: Manually add information you want your LLMs to remember via the Settings > Personalization > Memory menu. Memories can be added, edited, and deleted.