h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/

Mismatch Between Query Response and Source Document Ordering in a Captive Network with Multiple Documents #1409

Open llmwesee opened 6 months ago

llmwesee commented 6 months ago

Hello,

I've successfully deployed the LLM on a custom dataset within a controlled network, covering over 1000 PDF documents. The responses are comprehensive and informative. However, I've noticed a significant issue in the way sources are presented.

When a query is raised, the model accurately retrieves responses from relevant documents. However, the sources listed at the end do not seem to align with the provided response. The document mentioned in the response is consistently placed at the very bottom of the list of sources. Ideally, the relevant document should be listed first, enabling users to access the specific source document immediately.

This discrepancy undermines the primary goal of using the LLM. The intent is to present query details along with the corresponding source at the top of the list for quick access. Currently, users have to sift through the entire list of sources to find the document related to the response, which defeats the purpose of streamlining information retrieval from a pool of 1000 PDFs.

I am seeking suggestions on how to address this issue as it poses a significant obstacle in the effective deployment of the solution. Your assistance in resolving this matter is crucial, and I appreciate any insights or recommendations you can provide.

Thank you

pseudotensor commented 6 months ago

If you can give me an example repro of your case using some set of public documents that shows the relevant sources at the bottom of the list, I can take a look.

The sources are scored using a standard semantic similarity search with the 'l2' distance metric. The LLM then decides what is most relevant from those results. That is largely why LLMs are so useful: they can find the relevant information when semantic similarity alone cannot. That's why we fill the context with a variety of retrieved chunks and let the LLM decide.
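For illustration, here is a minimal sketch of that ranking step, assuming chunks are scored purely by L2 distance between embeddings as described above. The arrays, dimensions, and names are stand-ins, not h2oGPT internals:

```python
# Minimal sketch: rank document chunks by L2 distance to the query embedding,
# then pack the top-k chunks into the LLM's context. Random vectors stand in
# for real embedding-model output.
import numpy as np

rng = np.random.default_rng(0)
query_vec = rng.normal(size=384)            # embedding of the user query
chunk_vecs = rng.normal(size=(1000, 384))   # embeddings of document chunks

# L2 distance: smaller means more semantically similar under this metric.
dists = np.linalg.norm(chunk_vecs - query_vec, axis=1)

top_k = 4
top_idx = np.argsort(dists)[:top_k]         # best-scoring chunks first
print("chunks packed into context, most similar first:", top_idx.tolist())
```

Note that this ordering reflects embedding similarity to the query, not which chunk the LLM actually drew on for its answer, which is the gap described in this issue.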

There's no easy way to extract which text/sources the LLM actually used; that's an area of active research. In some cases a simple n-gram check may help, but it can also hurt for non-keyword cases. There may be a way to back-propagate through the model to see why it decided what it did, but nobody has figured that out for the auto-regressive case, only for classification.
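As a rough sketch of what such an n-gram check could look like (purely illustrative, with made-up answer/chunk text, and not code from this repo):

```python
# Score each retrieved chunk by how many word n-grams it shares with the
# generated answer, then list the sources with the highest overlap first.
from collections import Counter

def ngrams(text: str, n: int = 3) -> Counter:
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def overlap_score(answer: str, chunk: str, n: int = 3) -> int:
    a, c = ngrams(answer, n), ngrams(chunk, n)
    return sum((a & c).values())   # count of shared n-grams

answer = "Revenue grew 12% in fiscal 2023 driven by the recruitment segment."
chunks = {
    "report_2023.pdf p.14": "The recruitment segment drove growth; revenue grew 12% in fiscal 2023.",
    "report_2022.pdf p.9": "Fiscal 2022 saw flat revenue in the real-estate segment.",
}
ranked = sorted(chunks, key=lambda k: overlap_score(answer, chunks[k]), reverse=True)
print(ranked)  # sources most likely used by the answer listed first
```

As noted above, this helps when the answer reuses wording from the source, but can mis-rank sources when the answer paraphrases heavily.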

If the user has to verify the answer, the best approach is to present the sources more cleanly, as in our Enterprise h2oGPT https://h2o.ai/resources/video/enterprise-h2ogpt/, which shows full page references with highlighted text regions. But that doesn't really solve your problem when there are numerous documents.

rohitnanda1443 commented 5 months ago

How does one get the highlighted text in the source when one clicks on it in the open-source version?

pseudotensor commented 5 months ago

There's no highlighted text in the OSS version.

rohitnanda1443 commented 5 months ago

How does one resolve the "detail": "File not allowed" error when one clicks on the sources?

pseudotensor commented 5 months ago

^ https://github.com/h2oai/h2ogpt/issues/1512

rohitnanda1443 commented 5 months ago

A further question: the LLM is unable to provide information from the document when that information is given in a table in the PDF. Even after being told specifically which table to look at, it is unable to give the results.

For example, if one asks what all the company's investments are, it is unable to list them (see Table 3B in the 2023 PDF).

Going forward, the aim was to find out which new investments were made during the year, which could be done by comparing the annual reports of two years. The LLM is unable to do that even after being pointed to the two tables to compare.

Annual Reports: https://www.infoedge.in/InvestorRelations/IR_Annual_Report

pseudotensor commented 5 months ago

Smarter models do better at such tasks.

https://github.com/h2oai/enterprise-h2ogpte/blob/main/rag_benchmark/results/test_client_e2e.md
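One possible workaround, sketched below under assumptions not stated in this thread: extract the tables from the PDFs up front (for example with pdfplumber) and ingest them as plain text, so the model sees the table contents directly rather than having to reconstruct them from the PDF layout. The file name and row formatting here are hypothetical.

```python
# Hedged sketch (not an h2ogpt feature): pre-extract tables from an annual
# report with pdfplumber and emit them as pipe-separated text rows that can
# be ingested like any other text document.
import pdfplumber

rows_as_text = []
with pdfplumber.open("infoedge_annual_report_2023.pdf") as pdf:
    for page_num, page in enumerate(pdf.pages, start=1):
        for table in page.extract_tables():
            for row in table:
                cells = [c.strip() if c else "" for c in row]
                rows_as_text.append(f"p{page_num}: " + " | ".join(cells))

# Rows from two years' reports could then be placed side by side in the prompt
# to ask which investments are new.
print("\n".join(rows_as_text[:20]))
```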