J4NN0 / llm-rag

LLM prompt augmentation with RAG, integrating external custom data from a variety of sources and allowing you to chat with those documents
MIT License

question #1

Closed kalle07 closed 3 months ago

kalle07 commented 8 months ago

hey,

you have no discussion area, so I'm asking here ;)

what is better at RAG than "pdfgear" (https://www.pdfgear.com/de/)? It's 300MB (offline) and better than everything I tried before ...

how do you prevent, in your software, the model from hallucinating and telling you something it knows but that isn't based on the document? I always had that impression with privategpt, gpt4all and khoj-ai! PDFgear says "no, I didn't find it" or "are you sure you are talking about that document" ...

btw, do you have a GUI, or is it possible as an extension in oobabooga - text-generation-webui?

J4NN0 commented 8 months ago

Hi @kalle07,

how do you prevent, in your software, the model from hallucinating and telling you something it knows but that isn't based on the document?

I'm not sure you can have an LLM that never hallucinates, but you can certainly reduce hallucination. You can achieve this by fine-tuning the pre-trained model on domain-specific data, by using techniques like "chain-of-thought prompting", and so on.
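Another thing that helps with the "no, I didn't find it" behaviour you describe is instructing the model to answer only from the retrieved context. Here is a minimal sketch of the idea (it assumes an OpenAI-style chat client and a hypothetical `retrieve_chunks()` helper for the retrieval step, so treat it as an illustration rather than this repo's exact code):

```python
# Minimal sketch: constrain the model to the retrieved chunks so it
# refuses to answer when the documents don't contain the information.
# Assumes the OpenAI Python client (>= 1.0) and a hypothetical
# retrieve_chunks() function that returns the top-k document passages.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Answer the question using ONLY the context below. "
    "If the context does not contain the answer, reply exactly: "
    '"I could not find this in the provided documents." '
    "Do not use outside knowledge."
)

def answer(question: str, chunks: list[str]) -> str:
    context = "\n\n".join(chunks)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # any chat model works here
        temperature=0,           # lower temperature -> fewer made-up details
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# chunks = retrieve_chunks("my question")   # hypothetical retrieval step
# print(answer("my question", chunks))
```

Grounding the prompt this way doesn't eliminate hallucination, but it makes "I could not find this" a valid and likely answer when the retrieved context is missing the information.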

btw, do you have a GUI, or is it possible as an extension in oobabooga - text-generation-webui?

Not at the moment. If I ever implement one, you will find it here.