Closed sudarshan1994 closed 6 years ago
It's under the section called "AddSent and AddOneSent" under "Main Experiments". The direct links to the datasets are: AddSent and AddOneSent.
@robinjia to reproduce the AddOneSent results, do we need to consider only the question-answer pairs with -high-conf-turk in their ID, or all of them?
Hi,
You should consider all of them. In particular, for each original example, the evaluation returns the worse of the model's accuracy on the original example and its accuracy on the example with one added sentence. Please see the evaluation script for more details.
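A minimal sketch of that scoring scheme: group each original question with its adversarial variant(s), take the worst per-question score, and average. The function name, input format, and per-example scores below are illustrative assumptions, not the actual evaluation script's API.

```python
def adversarial_score(scores_by_id):
    """Worse-of-two adversarial evaluation (sketch, not the real script).

    scores_by_id maps an original question ID to a list of per-example
    scores: the original example plus its adversarial variants (variant
    IDs are assumed to collapse back to the original question ID).
    """
    # For each original question, keep the worst score across variants.
    worst = {qid: min(scores) for qid, scores in scores_by_id.items()}
    # Average the per-question worst scores.
    return sum(worst.values()) / len(worst)


# Illustrative example: a model that answers the original correctly (1.0)
# but is fooled by the added sentence (0.0) gets 0.0 for that question.
scores = {
    "q1": [1.0, 0.0],  # fooled by the adversarial variant
    "q2": [1.0, 1.0],  # robust to the adversarial variant
}
print(adversarial_score(scores))  # 0.5
```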
I am not able to find the adversarial dataset at the CodaLab link. Could you please point me to its location on the CodaLab page?