Open flavienbwk opened 1 year ago
The challenge with using machine learning to summarize documents, or to search and provide feedback, is that these systems are prone to error.
I think we need to take a stance on how we position ourselves when it comes to the use of AI inside tools such as Aleph. Simply leaping in without due care could have negative implications.
If this is about ethical concerns, I don't think it's related to the technical implementation of a RAG in Aleph (processing of docs by AI). The two efforts (which are both important) can be carried out in parallel.
On that point, we could warn users that the LLM only guides them toward the most relevant document or paragraph, and that its feedback should be treated with caution (as with any commercial LLM service available today, e.g. ChatGPT).
For those coming to this later and asking the question "What does RAG stand for?" this provides a good overview: https://www.promptingguide.ai/techniques/rag
@flavienbwk Seems like you have significant knowledge in this area. Have you considered putting together a pull request?
Sure, I've considered it and would love to, ahah! But I don't have the time right now. Nevertheless, I hope this issue opens up the subject for this project and shows whether there is any interest from the community.
@flavienbwk have you tried https://github.com/alephdata/document-categorization? I think @Rosencrantz was writing about this approach. Personally, I'm looking for an assistant that can answer questions, so it's not about improving search queries or document categorization, but rather about data fusion.
Is your feature request related to a problem? Please describe.
Get better results from user input with interpreted responses.
Describe the solution you'd like
Integrate an LLM in Aleph (per-paragraph vectorization, or keyword search + summarization) to run search queries from user input and surface the most relevant documents/paragraphs. A sketch of the retrieval step is shown below.
Use models such as Llama 2 or Mistral 7B.
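As a rough illustration of the per-paragraph vectorization idea (not Aleph's actual API), here is a minimal sketch using the sentence-transformers library. The model name, the example paragraphs, and the query are all hypothetical; in Aleph the paragraphs would come from ingested documents and the embeddings would live in an index rather than in memory.

```python
# Minimal sketch of per-paragraph retrieval, assuming the sentence-transformers package.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model choice

# Hypothetical paragraphs extracted from ingested documents.
paragraphs = [
    "The contract was signed in Berlin on 12 March 2019.",
    "Payments were routed through a shell company in Cyprus.",
    "The annual report lists three subsidiaries in Panama.",
]

# Embed every paragraph once at indexing time.
corpus_embeddings = model.encode(paragraphs, convert_to_tensor=True)

# At query time, embed the user's question and rank paragraphs by similarity.
query = "Where was the money routed through?"
query_embedding = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]

for hit in hits:
    print(f"{hit['score']:.3f}  {paragraphs[hit['corpus_id']]}")

# The top paragraphs could then be passed as context to an LLM
# (e.g. Llama 2 or Mistral 7B) to generate an answer grounded in the documents.
```

The design point is that the LLM never searches the corpus itself: retrieval stays deterministic and inspectable, and the model only summarizes or answers over the paragraphs it was given, which also makes it easier to show users the sources behind each answer.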
Describe alternatives you've considered