cdqa-suite / cdQA

⛔ [NOT MAINTAINED] An End-To-End Closed Domain Question Answering System.
https://cdqa-suite.github.io/cdQA-website/
Apache License 2.0

Discussion around BERTserini paper #31

Closed fmikaelian closed 5 years ago

fmikaelian commented 5 years ago

See: https://export.arxiv.org/pdf/1902.01718

fmikaelian commented 5 years ago

Here are my takeaways:

They modified BERT to compare predictions in a meaningful way (see #36):

> to allow comparison and aggregation of results from different segments, we remove the final softmax layer over different answer spans.

But this point still needs to be clarified.

andrelmfarias commented 5 years ago

What I understand from the quote below is that they only use the logits (scores) to compare answer spans, instead of using the probabilities obtained after applying the softmax function.

> to allow comparison and aggregation of results from different segments, we remove the final softmax layer over different answer spans.
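
A small numerical sketch of that reading (the logit values are made up just to illustrate the point): softmax normalizes within a segment, so probabilities from different segments are not on a common scale, while the raw logits are.

```python
import numpy as np

def softmax(x):
    """Softmax over a vector of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical logits for candidate spans in two different segments.
# Segment A has one dominant candidate, segment B has several close ones.
logits_a = np.array([8.0, 1.0, 0.5])
logits_b = np.array([9.0, 8.5, 8.2])

# Softmax probabilities are normalized *within* each segment, so segment A's
# weaker candidate ends up with the higher probability:
print(softmax(logits_a)[0])  # ~0.999
print(softmax(logits_b)[0])  # ~0.49

# The raw logits stay on a common scale, so segment B's candidate (9.0)
# correctly beats segment A's (8.0) when comparing across segments.
print(logits_b[0] > logits_a[0])  # True
```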

@fmikaelian what do you think?

fmikaelian commented 5 years ago

Yes

It would be useful to cross-check with Danqi Chen's thesis, where she mentions something about this for their DrQA system: https://cs.stanford.edu/~danqi/papers/thesis.pdf

Also we should follow this thread: https://github.com/huggingface/pytorch-pretrained-BERT/issues/360

fmikaelian commented 5 years ago

In section 5.2.3 of Danqi Chen's thesis:

> We apply our trained DOCUMENT READER for each single paragraph that appears in the top 5 Wikipedia articles and it predicts an answer span with a confidence score. To make scores compatible across paragraphs in one or several retrieved documents, we use the unnormalized exponential and take argmax over all considered paragraph spans for our final prediction. This is just a very simple heuristic and there are better ways to aggregate evidence over different paragraphs.
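
A rough sketch of that heuristic (the dictionary fields and scores below are invented for illustration, not DrQA's actual data structures): each paragraph's best span keeps its raw reader scores, and the final answer is the argmax of the unnormalized exponential over all spans.

```python
import math

# Hypothetical output of running the reader on each paragraph independently
# (spans and scores are made up for illustration).
predictions = [
    {"span": "Gustave Eiffel", "start_score": 6.3, "end_score": 5.9},  # paragraph 1
    {"span": "in 1889",        "start_score": 5.1, "end_score": 4.7},  # paragraph 2
    {"span": "in Paris",       "start_score": 3.2, "end_score": 2.8},  # paragraph 3
]

def span_score(p):
    # "Unnormalized exponential": exp of the raw span score, *without*
    # dividing by the sum over spans, so values stay comparable across
    # paragraphs and documents.
    return math.exp(p["start_score"] + p["end_score"])

# Final prediction is the argmax over all considered paragraph spans.
best = max(predictions, key=span_score)
print(best["span"])  # "Gustave Eiffel"
```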

This part seems to be implemented here:

https://github.com/facebookresearch/DrQA/blob/d27180fc527084263ca0e43091f5d35c4bbd4963/drqa/reader/layers.py#L243

And here:

https://github.com/facebookresearch/DrQA/blob/1f811ded549a69f8b5ea303fb6f6d35ad6fc84ae/drqa/pipeline/drqa.py#L113

Our predict() function does not currently return this confidence score. How can we get it in our setup and adapt it for comparison?
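
One possible direction, assuming we can get the raw start/end logits out of the BERT QA head (the function below is just a placeholder sketch, not our current API): score each span by the sum of its start and end logits and return that alongside the answer, so spans from different paragraphs can be compared directly.

```python
import torch

def best_span_with_score(start_logits, end_logits, max_answer_len=30):
    """Return (start, end, score) for the best span in a single segment.

    `start_logits` and `end_logits` are 1-D tensors from the BERT QA head.
    The score is the raw sum of the start and end logits (no softmax), so it
    can be compared across segments/paragraphs as in the BERTserini setup.
    """
    best = (0, 0, float("-inf"))
    for s in range(start_logits.size(0)):
        last = min(s + max_answer_len, end_logits.size(0))
        for e in range(s, last):
            score = (start_logits[s] + end_logits[e]).item()
            if score > best[2]:
                best = (s, e, score)
    return best

# Usage sketch: pick the overall answer across paragraphs by this raw score.
# spans = [best_span_with_score(s, e) for s, e in per_paragraph_logits]
# final = max(spans, key=lambda t: t[2])
```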