mem0ai / mem0

The Memory layer for your AI apps
https://mem0.ai
Apache License 2.0

[Mem0] Langchain model integration #1515

Open valentimarco opened 1 month ago

valentimarco commented 1 month ago

🚀 The feature

It would be great to have a LangChain integration! Why? It's great to use official libraries/clients, but at the same time I don't want to create 300 objects from different libraries for the same LLM/embedder provider!

LangChain provides 3 simple methods (`invoke` for LLMs, and `embed_query` or `embed_documents` for embedders) to interact with the model.
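To make the request concrete, here is a minimal sketch of that three-method surface. It uses stand-in classes instead of real LangChain models (so it runs without API keys or network); the class names and toy return values are illustrative only, not LangChain's actual implementations.

```python
# Stand-ins mimicking the LangChain interface described above.
from typing import List


class FakeChatModel:
    """Mimics a LangChain chat model: a single .invoke() entry point."""

    def invoke(self, prompt: str) -> str:
        return f"echo: {prompt}"


class FakeEmbedder:
    """Mimics a LangChain embedder: embed_query / embed_documents."""

    def embed_query(self, text: str) -> List[float]:
        # Toy one-dimensional "vector": just the text length.
        return [float(len(text))]

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self.embed_query(t) for t in texts]


llm = FakeChatModel()
print(llm.invoke("hi"))                       # -> "echo: hi"

embedder = FakeEmbedder()
print(embedder.embed_documents(["a", "bb"]))  # -> [[1.0], [2.0]]
```

Any provider's chat model or embedder behind this uniform surface would be interchangeable, which is the point of the feature request.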

If you like, I can create a PR with a preview of what I've discussed!

Great work btw <3

Motivation, pitch

I want to integrate the mem0 library in a project where we use langchain (unfortunately XD). At the moment, the only way is to extract the config.

AIWithShrey commented 1 month ago

I was looking into the same thing and found nothing on Mem0 integration with LangChain.

I think it'd be very useful and convenient to have this feature.

biniyam69 commented 1 month ago

I think this issue belongs in langchain repo

valentimarco commented 1 month ago

> I think this issue belongs in langchain repo

The request is to integrate langchain models as a mem0 model category, not to integrate mem0 as a langchain package...

Dev-Khant commented 1 month ago

Hey @valentimarco The reason for not using langchain here is that we don't want to depend on it; instead we use the libraries from the LLM and vector DB providers directly. By doing this we have complete flexibility over the core features.

So can you please let us know what exact issues you have while using Mem0 in a langchain application.

valentimarco commented 1 month ago

> Hey @valentimarco The reason for not using langchain here is that we don't want to depend on it; instead we use the libraries from the LLM and vector DB providers directly. By doing this we have complete flexibility over the core features.
>
> So can you please let us know what exact issues you have while using Mem0 in a langchain application.

The only reason is to avoid having multiple copies of the same OpenAI client in memory that use the same model with the same params. I can understand the flexibility and all the comfort that being independent from large libraries like langchain, llamaindex, or even haystack provides, but it's not great to have multiple instances of the same OpenAI client interacting with the same model. This could also be beneficial for memory usage, even if I don't have a benchmark for it!

My idea was to add a model class that, when constructed, takes a langchain model and uses it to interact with the library!

Maybe this can lead to problems where people ask "why doesn't this llm/embedder work with this?"
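The adapter idea above could be sketched like this. Everything here is hypothetical: `LangchainLLM` and `generate_response` are illustrative names, not mem0's real API, and `EchoModel` is a stand-in for a real LangChain chat model so the example runs without network access or API keys.

```python
# Hypothetical mem0-style LLM class wrapping any object that exposes
# LangChain's .invoke() method (e.g. ChatOpenAI, ChatAnthropic, ...).
class LangchainLLM:
    def __init__(self, langchain_model):
        # The caller passes in an already-configured LangChain model,
        # so mem0 would not need its own per-provider client.
        self.model = langchain_model

    def generate_response(self, messages):
        # Delegate to the wrapped model. LangChain chat models accept
        # a list of (role, content) tuples, among other formats.
        result = self.model.invoke(messages)
        # Real chat models return a message object with .content;
        # fall back to str() for plain text-completion models.
        return getattr(result, "content", str(result))


# Stand-in "LangChain model" for demonstration purposes.
class EchoModel:
    def invoke(self, messages):
        return f"echo: {messages[-1][1]}"


llm = LangchainLLM(EchoModel())
print(llm.generate_response([("user", "hello")]))  # -> "echo: hello"
```

Because the wrapper only relies on `.invoke()`, a single LangChain client instance could be shared between the host application and mem0, which is exactly the duplication problem raised earlier in the thread.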