While retrieving the relevant documents from the database, we get a similarity score for each chunk of data.
If the similarity score is at or below a threshold (0.4 in this case), we call the API again with a different prompt that rechecks whether the response from the first call is actually drawn from the retrieved documents.
New functions for answer verification:
RAG.py: Added the recheck_prompt function to create a prompt template for verifying if an answer is fully supported by the given context.
RAG.py: Added the is_answer_relevant function to call the Mistral LLM and check if the answer is based on the provided context.
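A minimal sketch of what these two helpers could look like, assuming a LangChain prompt template and a local Mistral model served through Ollama; the prompt wording, the model name, and the function signatures are illustrative assumptions rather than the exact code in RAG.py:

```python
# Illustrative sketch only: prompt wording, model name, and signatures are assumptions.
from langchain.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama

RECHECK_TEMPLATE = """
You are checking an answer against its source material.

Context:
{context}

Answer:
{answer}

Is the answer fully supported by the context above? Reply with only "yes" or "no".
"""


def recheck_prompt(context: str, answer: str) -> str:
    """Build the verification prompt from the retrieved context and the first answer."""
    template = ChatPromptTemplate.from_template(RECHECK_TEMPLATE)
    return template.format(context=context, answer=answer)


def is_answer_relevant(context: str, answer: str) -> bool:
    """Ask the Mistral model whether the answer is grounded in the provided context."""
    model = Ollama(model="mistral")  # assumed: local Mistral served via Ollama
    verdict = model.invoke(recheck_prompt(context, answer))
    return verdict.strip().lower().startswith("yes")
```

Returning a plain boolean keeps the verification step easy to gate behind the similarity-score threshold in query_rag.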
Enhancements to query_rag function:
RAG.py: Added retrieval of the score metadata from the most relevant document.
RAG.py: Integrated the is_answer_relevant function to verify the generated answer when the score is less than or equal to 0.4, and updated the response accordingly if the answer is deemed irrelevant (as sketched below).
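Below is a hedged sketch of how the enhanced query_rag flow might fit together, reusing the is_answer_relevant helper sketched above. The Chroma vector store, the answer prompt, the value of k, and the fallback message are assumptions; only the retrieval of the top document's score and the <= 0.4 verification check mirror the change described here:

```python
# Illustrative sketch only: the vector store, prompt, k, and fallback text are assumptions.
from langchain.prompts import ChatPromptTemplate
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma

ANSWER_TEMPLATE = """
Answer the question based only on the following context:

{context}

Question: {question}
"""

SCORE_THRESHOLD = 0.4  # verification is triggered at or below this similarity score


def query_rag(query_text: str, db: Chroma) -> str:
    # Retrieve the top chunks together with their similarity scores.
    results = db.similarity_search_with_score(query_text, k=5)
    context = "\n\n---\n\n".join(doc.page_content for doc, _score in results)

    # First call: generate an answer from the retrieved context.
    prompt = ChatPromptTemplate.from_template(ANSWER_TEMPLATE).format(
        context=context, question=query_text
    )
    model = Ollama(model="mistral")  # assumed: local Mistral served via Ollama
    answer = model.invoke(prompt)

    # Score of the most relevant document (first result in the list).
    top_score = results[0][1]

    # Weak match: recheck whether the answer is actually grounded in the context.
    if top_score <= SCORE_THRESHOLD and not is_answer_relevant(context, answer):
        answer = "The retrieved documents do not contain enough information to answer this question."

    return answer
```

In this sketch the threshold check gates the extra LLM call, so queries with a strong match skip the cost of the second verification round.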