Hi @rlancemartin,
I noticed that although the author describes the RAG-Fusion implementation as using the original query in the retrieval stage as well (here and here), none of the implementations (the author's, this webinar, and the LangChain template) include the original query.
Could you tell me whether you have experimented with adding the original query with a higher weight (as the author describes in his article)?
Do I understand correctly that we would then assume score = 1 for all chunks retrieved for the original query before applying RRF to the ranked documents?
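To make the question concrete, here is a minimal sketch of what I mean by weighting the original query: a plain RRF over per-query ranked lists, where the original query's list gets a larger multiplier. The function name, the toy document ids, and the weight values are all my own assumptions, not anything from the existing implementations.

```python
from collections import defaultdict

def weighted_rrf(ranked_lists, weights=None, k=60):
    """Reciprocal Rank Fusion where each query's ranked list can carry a weight.

    ranked_lists: one ranked list of doc ids per query
    weights: optional per-list multipliers (e.g. > 1 for the original query)
    k: standard RRF smoothing constant
    """
    if weights is None:
        weights = [1.0] * len(ranked_lists)
    scores = defaultdict(float)
    for docs, weight in zip(ranked_lists, weights):
        for rank, doc in enumerate(docs):
            # each appearance contributes weight / (k + rank), ranks 1-based
            scores[doc] += weight / (k + rank + 1)
    # highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# original query's results first, weighted 2x; generated queries weighted 1x
fused = weighted_rrf(
    [["a", "b", "c"], ["b", "d"], ["c", "b"]],
    weights=[2.0, 1.0, 1.0],
)
```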
Or would the following approach be more robust and consistent?
1. After the retrieval stage (including the original query), rerank each query's results with a reranker.
2. Apply RRF to the reranked documents.
With this approach, the reranker would decide whether the results are relevant to each specific query, including the original one.
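The two steps above could be sketched as follows. The `rerank_fn` here is a placeholder for any reranker (e.g. a cross-encoder) that returns documents sorted by relevance to a query; the function names and the toy reranker in the usage example are hypothetical.

```python
from collections import defaultdict

def rerank_then_rrf(results_per_query, rerank_fn, k=60):
    """Proposed pipeline sketch: rerank each query's retrieved results,
    then fuse the reranked lists with plain (unweighted) RRF.

    results_per_query: list of (query, [doc, ...]) pairs, original query included
    rerank_fn: callable(query, docs) -> docs sorted by relevance to query
    """
    # step 1: per-query reranking
    reranked = [rerank_fn(query, docs) for query, docs in results_per_query]
    # step 2: RRF over the reranked lists
    scores = defaultdict(float)
    for docs in reranked:
        for rank, doc in enumerate(docs):
            scores[doc] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# toy stand-in reranker: docs containing the query string rank first
toy_reranker = lambda q, docs: sorted(docs, key=lambda d: q not in d)
fused = rerank_then_rrf(
    [("cat", ["dog story", "cat tale"]), ("pet", ["pet guide", "dog story"])],
    toy_reranker,
)
```

A document surfaced by several queries still accumulates RRF score across lists, but only after the reranker has judged its relevance to each individual query.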
It would be great to hear your opinion on this.
Thank you.