Closed kalle07 closed 3 months ago
Hi @kalle07,
> How do you prevent, in your software, the model from hallucinating and claiming something it knows that isn't based on the document?
I'm not sure you can have an LLM that never hallucinates. But you can certainly reduce hallucinations, for example by fine-tuning the pre-trained model on domain-specific data, or with techniques like "chain-of-thought prompting" and so on.
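Another common prompt-level way to reduce hallucinations in a RAG setup is to instruct the model to answer only from the retrieved context and to refuse otherwise. A minimal illustrative sketch (the function name and strings are made up for this example; the actual model call depends on your framework and is omitted):

```python
# Sketch: "grounding" the model by constraining answers to the retrieved
# document context via the prompt. This only builds the prompt string;
# plug it into whatever LLM client you use.

def build_grounded_prompt(context: str, question: str) -> str:
    """Build a prompt that tells the model to answer only from `context`
    and to admit when the answer is not in the document."""
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not contained in the context, reply exactly: "
        '"I could not find that in the document."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_grounded_prompt(
    context="The invoice total is 42 EUR, due on 2024-05-01.",
    question="What is the invoice total?",
)
print(prompt)
```

This kind of instruction is roughly what makes tools answer "I couldn't find that" instead of inventing something, though no prompt guarantees it.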
> By the way, do you have a GUI, or could it work as an extension for oobabooga / text-generation-webui?
Not at the moment. If I ever implement one you will find it here.
Hey,
you have no Discussions area, so I'm asking here ;)
What does better RAG than "pdfgear" (https://www.pdfgear.com/de/)? It's 300 MB, works offline, and is better than everything I've tried before...
How do you prevent, in your software, the model from hallucinating and claiming something it knows that isn't based on the document? I always had that impression with privateGPT, GPT4All, and khoj-ai! PDFgear says "no, I didn't find it" or "are you sure you're talking about that document?"...
By the way, do you have a GUI, or could it work as an extension for oobabooga / text-generation-webui?