kingjulio8238 / memary

Human Memory for AI Agents.
https://www.memarylabs.com
MIT License

Not able to understand how memary can act as an external memory. #45

Open rajib76 opened 2 weeks ago

rajib76 commented 2 weeks ago

Are there any examples that show how I can integrate memary with my existing LLM chain or agents? Is it possible to integrate it as a separate module?

kingjulio8238 commented 2 weeks ago

Hi! Thank you for the question.

Yes, memary can be added as a module (`pip install memary`) to an agent. We recently released #37 so that you can add/remove any tools, meaning memary is compatible with any agent configuration. The related demo can be found on our releases page (v0.1.3).

We will release a cookbook and interfaces to access all these features soon!

Hope this helps.

rajib76 commented 1 week ago

I went through the entire code, but it did not seem to me that this memory module can be generalized and applied to any agent as a separate component. Below are some of my observations; please let me know if my understanding is incorrect.

  1. Neo4j is not actually used as graph memory. I did not see any operations where entities and their relationships are pushed to Neo4j from memory. The module appears to assume that a Neo4j graph is already populated, which is then searched as part of the ReAct agent.
  2. I also did not see the logic for how the memory component automatically extracts entities from the response.

I may be missing something but I will dig more to understand.

rawwerks commented 1 week ago

> The NEO4J is actually not used as a Graph memory.

I would also like to clarify: is memary updating the Neo4j KG? I assumed yes from the LlamaIndex webinar demo, but if not, this will change my approach.

kingjulio8238 commented 1 week ago

@rajib76 thanks for the questions.

  1. Neo4j is used as our current graph provider. This can be verified on line 90 of base_agent.py, where the Neo4jGraphStore is initialized. Regarding memory being pushed to the graph, please see write_back (line 226 in the same file), where we insert the agent's response into the graph and index it using triplets. We don't assume that there is a prefilled KG, but this is super helpful feedback: we are actively working on memaryParse to allow parsing from multiple sources.

  2. In the same write_back function, we index the response using triplets, which are inserted into the knowledge graph (see line 234).
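
To illustrate the flow described above (response text → triplets → knowledge graph), here is a minimal sketch. Note that `InMemoryGraphStore` and the regex-based `extract_triplets` are simplified stand-ins for the real Neo4jGraphStore and the LLM-based triplet extraction, not memary's actual code:

```python
import re

class InMemoryGraphStore:
    """Toy stand-in for a Neo4jGraphStore: collects (subject, relation, object) triplets."""
    def __init__(self):
        self.triplets = []

    def upsert_triplet(self, subj, rel, obj):
        # Upsert semantics: insert only if the triplet is not already stored.
        if (subj, rel, obj) not in self.triplets:
            self.triplets.append((subj, rel, obj))

def extract_triplets(text, max_triplets=8):
    """Toy deterministic extractor; the real pipeline delegates this to an LLM
    via llama_index's KnowledgeGraphIndex (max_triplets_per_chunk=8)."""
    pattern = r"(\w+) (is|has) (?:a |an |the )?(\w+)"
    return re.findall(pattern, text)[:max_triplets]

def write_back(response_text, graph_store):
    """Analogue of base_agent.write_back: push the agent's response into the graph."""
    for subj, rel, obj in extract_triplets(response_text):
        graph_store.upsert_triplet(subj, rel, obj)

store = InMemoryGraphStore()
write_back("Paris is the capital of France. France has Paris.", store)
# store.triplets now holds [("Paris", "is", "capital"), ("France", "has", "Paris")]
```

In the real implementation, the storage context carries the Neo4j-backed graph store, so indexing the response persists the triplets to Neo4j rather than to a Python list.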

Hope this helps.

kingjulio8238 commented 1 week ago

@rawwerks Thanks for the question.

Yes, we do update the KG as described in the webinar. memary's core implementation is an auto-generated graph based on the agent's prior responses. Please see the comment above for code references to these implementations.

Hope this helps.

rajib76 commented 1 week ago

This is super helpful. I am out right now; I will take a look as soon as I reach home. Can't wait to take a look.

rajib76 commented 1 week ago

I am probably still missing something. I looked at base_agent.py, and there are three things that I observed:

  1. write_back just creates the index, which holds the nodes in memory. It uses the llama_index KnowledgeGraphIndex:

     ```python
     def write_back(self):
         documents = SimpleDirectoryReader(
             input_files=["data/external_response.txt"]
         ).load_data()

         KnowledgeGraphIndex.from_documents(
             documents,
             storage_context=self.storage_context,
             max_triplets_per_chunk=8,
         )
     ```

The above code still does not write to Neo4j.

  2. I was expecting that node extraction and writing into the graph would be part of the memory component, not of base_agent. If it lives in base_agent, the memory is not generalized.
  3. The KnowledgeGraphIndex feature of llama_index is non-deterministic: it does not create graphs correctly and is inconsistent. With this approach, we will end up with a dirty graph memory. I used it earlier to parse a resume, and it did not produce a consistent graph.

I would like to get some guidance and feedback on the above observations.
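
For what it's worth, one possible workaround for the non-determinism: KnowledgeGraphIndex.from_documents accepts a kg_triplet_extract_fn override, so a deterministic rule-based extractor could replace the LLM call. The grammar below is a toy assumption for illustration, not production-ready extraction logic:

```python
import re
from typing import List, Tuple

def deterministic_triplets(text: str) -> List[Tuple[str, str, str]]:
    """Rule-based triplet extractor: the same input always yields the same
    triplets, unlike the default LLM-based extraction."""
    # Toy grammar: "<subject> <relation> <object>" for a few fixed relations.
    pattern = r"(\w+) (is|has|works at|lives in) (?:a |an |the )?(\w+)"
    return re.findall(pattern, text)

# Usage sketch (not executed here; requires llama_index and a graph store):
# index = KnowledgeGraphIndex.from_documents(
#     documents,
#     storage_context=storage_context,
#     kg_triplet_extract_fn=deterministic_triplets,  # bypasses the LLM extractor
# )
```

A rule-based extractor trades recall for consistency: it will miss relations the toy grammar doesn't cover, but the same response always produces the same triplets, so the graph stays clean.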