Open arun-gupta opened 5 days ago
This may be related to https://github.com/opea-project/GenAIExamples/issues/1005
Hi @arun-gupta!
By any chance, did you also provide input data for the RAG vector database that includes information about OPEA (e.g., the OPEA main README on GitHub)?
LLM hallucination is usually a function of the model's training data and, for RAG, its input data. The default model for this example is Intel/neural-chat-7b-v3-3, which would not have had any information about OPEA in its training data: the model was released almost a year ago, and OPEA is only ~6 months old. Thus, if no raw data source is provided alongside the input query "what is opea?", it is my understanding that we should expect the model to return a nonsensical/incorrect response.
The first response in the image is from before providing the link; the second response is from after providing the link to https://opea.dev.
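For reference, feeding a URL into the RAG vector database is typically done through the dataprep microservice. A minimal sketch, assuming the default ChatQnA compose deployment with the dataprep service exposed on port 6007 (the port and endpoint path are assumptions and may differ in your deployment):

```shell
# Ingest the OPEA homepage into the vector database via the dataprep service.
# Assumes the ChatQnA stack is running locally and dataprep listens on :6007.
curl -X POST http://localhost:6007/v1/dataprep \
  -H "Content-Type: multipart/form-data" \
  -F 'link_list=["https://opea.dev"]'

# Then re-ask the question through the ChatQnA mega-service (port assumed).
curl -X POST http://localhost:8888/v1/chatqna \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is OPEA?"}'
```

With the page ingested, the retriever can supply OPEA-related context to the LLM, which is why the second response in the screenshot is grounded rather than hallucinated.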
Hi, I have tried to reproduce the issue, but I get the correct result both with images built from source and with images pulled from Docker Hub. Could you please build the images from the latest code and test again? We will also update the images on Docker Hub soon.
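For anyone following along, rebuilding and redeploying from the latest code looks roughly like the sketch below. The repository paths and compose file location are assumptions based on the GenAIExamples layout for the Xeon deployment and may have moved:

```shell
# Fetch the latest ChatQnA example code (paths are assumptions).
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/ChatQnA/docker_compose/intel/cpu/xeon

# Tear down any running stack, rebuild images from source, and bring it back up.
docker compose down
docker compose build --no-cache
docker compose up -d
```

Rebuilding with `--no-cache` ensures stale layers from older images are not reused, which matters when the fix landed in recently changed source files.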
Priority
Undecided
OS type
Ubuntu
Hardware type
Xeon-GNR
Installation method
Deploy method
Running nodes
Single Node
What's the version?
latest
Description
Deployed ChatQnA on the AWS cloud; the model is hallucinating (see the response).
Reproduce steps
https://opea-project.github.io/latest/getting-started/README.html
Raw log
No response