mem0ai / mem0

The Memory layer for your AI apps
https://mem0.ai
Apache License 2.0

feat request: include metadata in prompt - add links to prompt to show references #467

Open ishaan1995 opened 1 year ago

ishaan1995 commented 1 year ago

🚀 The feature

Is there a plan to provide the links from which each embedded chunk came? It would make responses better, since we could reference and show the links in the response as well. I saw that internally, in embeddings_queue, we do store the link in metadata.

Motivation, pitch

I am creating a bot for content written by writers and want to show the original posts in the bot's responses.

cachho commented 1 year ago

Including meta_data is definitely on the roadmap, and for more capabilities than this one.

I can't speak to an ETA.

cachho commented 1 year ago

If anyone implements this, please add a template variable and make it configurable as such.
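To illustrate what a configurable template variable could look like, here is a minimal sketch using Python's `string.Template`. The placeholder names (`$context`, `$metadata`, `$query`) and the chunk/metadata shape are hypothetical, not embedchain's actual config keys:

```python
from string import Template

# Hypothetical prompt template with a $metadata placeholder for source links.
PROMPT = Template(
    "Use the following context to answer the query.\n"
    "Context: $context\n"
    "Sources: $metadata\n"
    "Query: $query\n"
)

def build_prompt(chunks, query):
    """Render retrieved chunks plus their source links into the prompt."""
    context = " | ".join(c["text"] for c in chunks)
    metadata = ", ".join(c["meta"]["url"] for c in chunks)
    return PROMPT.substitute(context=context, metadata=metadata, query=query)

# Illustrative retrieved chunks with the link stored in metadata.
chunks = [
    {"text": "June has 30 days.", "meta": {"url": "https://example.com/a"}},
    {"text": "Weather is sunny.", "meta": {"url": "https://example.com/b"}},
]
print(build_prompt(chunks, "How many days are in June?"))
```

Making the template itself a user-supplied string would keep the metadata injection configurable, as suggested above.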

justinlevi commented 1 year ago

Would love to try this framework out, but this is currently a deal breaker for me.

denisj commented 1 year ago

> If anyone implements this, please add a template variable and make it configurable as such.

@cachho What do you mean by a template variable?

Could it be something like query and query_with_metadata?

smach commented 9 months ago

> Would love to try this framework out, but this is currently a deal breaker for me.

Agree! Users need the ability to see the original document chunks in order to check for accuracy (and possibly learn more about their question). Embedchain looks very useful, but I won't deploy RAG apps internally or for the general public unless users can view the retrieved source document chunks.

deven298 commented 9 months ago

> Would love to try this framework out, but this is currently a deal breaker for me.

> Agree! Users need the ability to see original document chunks in order to check for accuracy (as well as possibly learn more about their question). Embedchain looks very useful, but I won't deploy RAG apps internally or for the general public unless users can view retrieved source document chunks.

Hi @smach, we provide a way for users to retrieve the source document chunks along with the answer from the query/chat methods. Please refer to our docs to learn how to do that: https://docs.embedchain.ai/api-reference/app/query

You can use the citations flag to retrieve the source document chunks. Please feel free to reach out to us if you have more questions.
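Per the docs linked above, passing `citations=True` makes `query()` return the answer together with a list of sources, each a chunk of text paired with its metadata. The exact metadata keys used below (`url`, `score`) are assumptions to verify against the docs; the formatter here runs on a mocked payload rather than a live embedchain call:

```python
def format_sources(sources):
    """Turn a citations payload (chunk_text, metadata) pairs into a readable list."""
    lines = []
    for i, (chunk, meta) in enumerate(sources, start=1):
        url = meta.get("url", "unknown source")  # "url" key is an assumption
        lines.append(f"[{i}] {url}: {chunk[:60]}")
    return "\n".join(lines)

# Mocked return value standing in for: _, sources = app.query(q, citations=True)
sources = [
    ("Original post text about June weather...",
     {"url": "https://blog.example/post1", "score": 0.82}),
]
print(format_sources(sources))
```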

DrDavidL commented 5 months ago

On the Chat with PDF page of my repo, I use Embedchain and process the source text from citations into a readable format for responses! Embedchain is wonderful, and feel free to use my code if helpful! https://github.com/DrDavidL/family-chat

DrDavidL commented 5 months ago

I was concerned about sources when using the web loader - my mistake! It works perfectly. Sample output:

[screenshot of sample output showing cited sources]

socratic-irony commented 4 months ago

I would love to see this as well -- specifically, I believe the request refers to what are sometimes called parenthetical citations, which are often in (author date: page) format, like this:

Today is the third day of June (Foo 2023: 12). Tomorrow, it will be the fourth day of June, and the weather is forecast to be sunny (Bar 2012: 3).

References:

  • Foo, John. 2023.
  • Bar, Sally. 2012.

Of course, many different citation formats are used in academia, but that could be sorted out later. The paper-qa package has parenthetical cites like this. I built a RAG pipeline locally that can track sources through the LLM calls, but I don't trust its reliability. I'm not sure of the best way to accomplish this, but it would amplify the value of the package immensely.
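The (author date: page) style above could be driven directly by chunk metadata. A minimal sketch, assuming hypothetical metadata keys (`author_last`, `author_first`, `year`, `page`) that a loader would need to populate:

```python
def cite(meta):
    """Render one parenthetical citation like (Foo 2023: 12)."""
    return f"({meta['author_last']} {meta['year']}: {meta['page']})"

def references(metas):
    """Render a deduplicated reference list from the cited chunks."""
    seen = []
    for m in metas:
        entry = f"{m['author_last']}, {m['author_first']}. {m['year']}."
        if entry not in seen:
            seen.append(entry)
    return seen

# Metadata for the two sources in the example sentence above.
metas = [
    {"author_last": "Foo", "author_first": "John", "year": 2023, "page": 12},
    {"author_last": "Bar", "author_first": "Sally", "year": 2012, "page": 3},
]
print(cite(metas[0]))       # parenthetical cite for the first chunk
print(references(metas))    # deduplicated reference list
```

Getting the LLM to emit the markers reliably is the hard part; the rendering itself is straightforward once the metadata is attached to each retrieved chunk.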