robinjia / adversarial-squad

Code from Jia and Liang, "Adversarial Examples for Evaluating Reading Comprehension Systems" (EMNLP 2017)
MIT License

Downloading the adversarial dataset #3

Closed sudarshan1994 closed 6 years ago

sudarshan1994 commented 6 years ago

I am not able to find the adversarial dataset via the CodaLab link. Could you please point me to its location on the CodaLab worksheet?

robinjia commented 6 years ago

It's under the section called "AddSent and AddOneSent" within "Main Experiments". The direct links to the datasets are: AddSent, AddOneSent.
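
Once downloaded, the files can be inspected like any SQuAD-format dataset. A minimal sketch, assuming the file follows the standard SQuAD v1.1 JSON layout; the filename `add_one_sent.json` below is a placeholder, not the actual name on CodaLab:

```python
import json

# Load the adversarial dataset (placeholder filename, assumed SQuAD v1.1 format).
with open("add_one_sent.json") as f:
    dataset = json.load(f)["data"]

# Collect question IDs to see which examples carry adversarial suffixes.
qids = [qa["id"]
        for article in dataset
        for paragraph in article["paragraphs"]
        for qa in paragraph["qas"]]
print(len(qids), qids[:5])
```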

pminervini commented 5 years ago

@robinjia To reproduce the AddOneSent results, do we need to consider only the question-answer pairs with -high-conf-turk in their ID, or all of them?

robinjia commented 5 years ago

Hi,

You should consider all of them. In particular, for each original example, the evaluation takes the worse of the model's accuracy on the original example and on the example with one added sentence. Please see the evaluation script for more details.
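
To make the aggregation concrete, here is a hedged sketch of that worst-case scoring, not the repo's actual evaluation script. It assumes adversarial IDs extend the original ID with a suffix such as "-high-conf-turk0", and that `f1_by_id` maps every question ID (original and adversarial) to the model's F1 score on that question:

```python
from collections import defaultdict

def worst_case_f1(f1_by_id):
    # Group each question with its adversarial variants by the original ID.
    groups = defaultdict(list)
    for qid, f1 in f1_by_id.items():
        orig_id = qid.split("-")[0]  # assumption: original ID precedes the first "-"
        groups[orig_id].append(f1)
    # For each original question, keep the worse of the original and adversarial scores.
    return 100.0 * sum(min(scores) for scores in groups.values()) / len(groups)

# Toy usage: the model answers q1 correctly only without the added sentence.
print(worst_case_f1({"q1": 1.0, "q1-high-conf-turk0": 0.0, "q2": 1.0}))  # 50.0
```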