Open robinsongh381 opened 1 year ago
Thanks for your attention to this work! We have no plan to release the training code.
@christineaa I have one more question regarding the experiments in the QKConv paper. When you trained and evaluated on the QReCC task, did you use the 14K conversations or the 80K question-answer pairs as the training dataset?
And similarly for the test dataset: did you use conversation-level or question-answer-level data?
Thank you!
We used the question-answer pairs as the training/dev/test dataset, with 60.4K, 3.1K, and 16.4K samples respectively.
Hi @christineaa, thanks for your nice work.
I have one more question: how should I build the BM25 index for the QReCC task? I noticed you posted a link to the ml-qrecc repo. Should I download the web pages from both the Common Crawl and the Wayback Machine and then build the BM25 index?
Thanks for your attention to this work! Yes, you should download the web pages from both sources and follow the instructions in the ml-qrecc repo.
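As a concrete starting point, if you use Pyserini to build the BM25 index (an assumption on my part; the ml-qrecc repo's own instructions are authoritative where they differ), the indexing step could look roughly like this, with `collection/` holding the passages converted to Pyserini's JSON collection format:

```shell
# Sketch only: paths and thread count are placeholders.
# Assumes each passage was converted to one JSON object of the form
# {"id": ..., "contents": ...}, as Pyserini's JsonCollection expects.
python -m pyserini.index.lucene \
  --collection JsonCollection \
  --input collection/ \
  --index indexes/qrecc-bm25 \
  --generator DefaultLuceneDocumentGenerator \
  --threads 8 \
  --storeRaw
```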
Thanks for your reply.
@christineaa I have further questions regarding training and evaluation of QKConv model on QReCC dataset.
In your paper, the third footnote states "We remove conversations without truth responses." What does this mean exactly? Did you apply this to the training set, the test set, or both? Could you share the code for this processing, if available? I cannot find anything relevant in your QKConv inference or dataset code.
For the evaluation in Table 2 of the QKConv paper on QReCC, did you use the above "removed" version of qrecc-test.json, or the plain qrecc-test.json? I mean qrecc-test.json from here.
Also, when reporting Table 2, did you exclude test examples that do not have gold knowledge?
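For concreteness, here is a minimal sketch of what I imagine that filtering could look like. The field name `Truth_answer` is my guess; the actual qrecc-test.json schema may use a different key:

```python
import json

def remove_missing_truth(samples, answer_key="Truth_answer"):
    """Keep only question-answer pairs whose gold answer is non-empty.

    `answer_key` is a hypothetical field name -- check it against the
    actual qrecc-test.json schema before using this.
    """
    return [s for s in samples if s.get(answer_key, "").strip()]

# Tiny illustration with made-up samples:
samples = [
    {"Question": "q1", "Truth_answer": "a1"},
    {"Question": "q2", "Truth_answer": ""},
]
print(len(remove_missing_truth(samples)))  # prints 1
```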
@robinsongh381 Thanks for your attention to this work!
@christineaa Thank you for your kind response.
I have a follow-up question regarding Question 3.
The absence of gold knowledge indicates that the essential piece of information does not exist within the knowledge pool, and hence a factually correct, knowledge-grounded response cannot be obtained.
For this reason, I have found that previous works on QReCC evaluation, such as DPR-IHN [1] and CONQRR [2], have excluded such cases (i.e., examples without gold-knowledge annotations) from their evaluation.
What is your opinion on this?
Thank you
[1] Saving Dense Retriever from Shortcut Dependency in Conversational Search
[2] CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning
@robinsongh381 The knowledge base for QReCC contains 54M passages, so for most questions there is at least some relevant knowledge. We demonstrate how the model makes use of incorrectly retrieved knowledge in Table 5 & Table 6.
However, DPR-IHN and CONQRR, which exclude samples without golden knowledge, are a different case: they report knowledge-selection Recall as their main metric, and Recall cannot be computed without golden knowledge.
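To illustrate why, here is a minimal sketch of Recall@k (a hypothetical helper, not from our codebase): when a sample has no gold passages, the denominator is zero and the metric is undefined, so such samples must be dropped from Recall evaluation.

```python
def recall_at_k(retrieved, gold, k=10):
    """Fraction of gold passages found in the top-k retrieved list.

    Returns None when there are no gold passages, since the metric
    is undefined in that case (division by zero gold passages).
    """
    if not gold:
        return None  # no gold knowledge: Recall cannot be computed
    hits = len(set(retrieved[:k]) & set(gold))
    return hits / len(gold)

print(recall_at_k(["p1", "p2", "p3"], ["p2", "p4"], k=2))  # prints 0.5
print(recall_at_k(["p1", "p2"], [], k=2))                  # prints None
```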
@christineaa Thanks for sharing this nice work!
Do you have any plans to release the training code?