PaddlePaddle / Knover

Large-scale open domain KNOwledge grounded conVERsation system based on PaddlePaddle
Apache License 2.0

Release of training code for QKConv #179

Open robinsongh381 opened 1 year ago

robinsongh381 commented 1 year ago

@christineaa Thanks for sharing this nice work!

Do you have any plans to release the training code?

christineaa commented 1 year ago

Thanks for your attention to this work! We have no plan to release the training code.

robinsongh381 commented 1 year ago

@christineaa I have one more question regarding the experiments in the QKConv paper. When you trained and evaluated on the QReCC task, did you use the 14K conversations or the 80K question-answer pairs as the training dataset?

And similarly for the test dataset: did you evaluate at the conversation level or the question-answer level?

Thank you!

christineaa commented 1 year ago

We used the question-answer pairs as the training/dev/test dataset, with 60.4K, 3.1K, and 16.4K samples respectively.

dhx20150812 commented 1 year ago

Hi, @christineaa . Thanks for your nice work.

I have one more question: how should I build the BM25 index for the QReCC task? I noticed you posted a link to the ml-qrecc repo. Should I download the webpages from both the Common Crawl and the Wayback Machine and then build the BM25 index?

christineaa commented 1 year ago

Thanks for your attention to this work! Yes, you should download both sets of web pages and follow the instructions in the ml-qrecc repo.
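For reference, the BM25 scoring that the index performs is simple to sketch. Below is a minimal pure-Python illustration with Okapi BM25 and a Lucene-style smoothed IDF (the toy passages are made up for this sketch; at QReCC's 54M-passage scale you would of course build a real inverted index with a toolkit, per the ml-qrecc instructions, rather than scoring every passage in a loop):

```python
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.2, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / N
    # Document frequency, then smoothed IDF (Lucene-style), per query term.
    df = {t: sum(1 for d in corpus_tokens if t in d) for t in set(query_tokens)}
    idf = {t: math.log((N - n + 0.5) / (n + 0.5) + 1.0) for t, n in df.items()}
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        norm = k1 * (1 - b + b * len(doc) / avgdl)
        scores.append(sum(
            idf[t] * tf[t] * (k1 + 1) / (tf[t] + norm)
            for t in query_tokens if tf[t]
        ))
    return scores

# Toy passage collection standing in for the QReCC passage pool.
passages = [
    "the wayback machine archives snapshots of web pages",
    "common crawl is a large corpus of crawled web pages",
    "bm25 ranks passages by weighted term overlap",
]
corpus = [p.split() for p in passages]
scores = bm25_scores("common crawl corpus".split(), corpus)
best = max(range(len(passages)), key=scores.__getitem__)
print(passages[best])  # the Common Crawl passage ranks first
```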

dhx20150812 commented 1 year ago

Thanks for your reply.

robinsongh381 commented 10 months ago

@christineaa I have further questions regarding the training and evaluation of the QKConv model on the QReCC dataset.

  1. In your paper, the third footnote states "We remove conversations without truth responses." What does this mean? Did you apply this to the training dataset, the test dataset, or both? Please share the code for this processing if available, because I cannot find anything relevant in your QKConv inference or dataset code.

  2. For the Table 2 evaluation on QReCC in the QKConv paper, did you use the above "removed" version of qrecc-test.json, or the plain qrecc-test.json? I mean qrecc-test.json from here.

robinsongh381 commented 10 months ago

Also, when you report Table 2, did you exclude test examples which do not have gold knowledge?

christineaa commented 10 months ago

@robinsongh381 Thanks for your attention to this work!

  1. We remove the samples when "Truth_answer"/"Answer" is an empty string for the training set (57946 samples left) and the test set (15024 samples left).
  2. We use the "removed" version of the test set, consistent with the evaluation code.
  3. We include samples without golden knowledge in Table 2 as the absence of golden knowledge does not affect response generation evaluation, and only exclude them in the knowledge selection evaluation.
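The filtering described in point 1 can be sketched as follows (the field names "Truth_answer"/"Answer" are the ones mentioned in this thread; the toy samples below are hypothetical, not real QReCC data):

```python
def filter_empty_answers(samples, answer_key="Truth_answer"):
    """Keep only samples whose gold answer is a non-empty string."""
    return [s for s in samples if s.get(answer_key, "").strip()]

# Toy samples mimicking the QReCC JSON schema discussed above.
samples = [
    {"Question": "Who wrote Hamlet?", "Truth_answer": "William Shakespeare"},
    {"Question": "An unanswerable question", "Truth_answer": ""},
]
kept = filter_empty_answers(samples)
print(len(kept))  # 1
```

Applied to the full train and test sets, this kind of filter would yield the reduced sample counts quoted above (57946 and 15024).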

robinsongh381 commented 10 months ago

@christineaa Thank you for the kind response.

I have a further question regarding point 3.

The absence of gold knowledge indicates that the essential piece of information does not exist within the knowledge pool, and hence a factually correct, knowledge-grounded response cannot be obtained.

For this reason, I have found that previous works on QReCC evaluation, such as DPR-IHN [1] and CONQRR [2], have excluded such cases (i.e., examples without gold-knowledge annotations) from their evaluation.

What is your opinion on this?

Thank you

[1] Saving Dense Retriever from Shortcut Dependency in Conversational Search

[2] CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

christineaa commented 10 months ago

@robinsongh381 The knowledge base for QReCC contains 54M passages, so there is, more or less, knowledge relevant to every question. We demonstrate how the model handles incorrectly retrieved knowledge in Table 5 & Table 6.

However, DPR-IHN and CONQRR excluding samples without golden knowledge is a different case: they report knowledge selection Recall metrics as their main results, and Recall cannot be computed without golden knowledge.
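The distinction above can be made concrete: Recall@k only asks whether a gold passage appears in the top-k retrieved passages, so samples with no gold annotation have to be skipped. A minimal sketch (the sample structure and passage ids here are hypothetical, just to illustrate the exclusion):

```python
def knowledge_recall_at_k(samples, k=1):
    """Recall@k over samples that have gold-knowledge annotations.

    Each sample: {"gold": set of gold passage ids (may be empty),
                  "retrieved": ranked list of passage ids}.
    Samples with no gold passages are skipped, since Recall is
    undefined for them -- the point made in the comment above.
    """
    evaluable = [s for s in samples if s["gold"]]
    hits = sum(1 for s in evaluable
               if any(p in s["gold"] for p in s["retrieved"][:k]))
    return hits / len(evaluable) if evaluable else 0.0

samples = [
    {"gold": {"p1"}, "retrieved": ["p1", "p9"]},  # hit at rank 1
    {"gold": {"p2"}, "retrieved": ["p7", "p2"]},  # miss at k=1, hit at k=2
    {"gold": set(),  "retrieved": ["p3"]},        # no gold: excluded
]
print(knowledge_recall_at_k(samples, k=1))  # 0.5
```

Response generation metrics, by contrast, compare the generated response against the truth answer, so the no-gold-knowledge samples can still be scored there.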