[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
How are A1, A2, A3 scores calculated from R1, R2, R3 test results? #2
The evaluation results shown here in the README and the ones in the paper are based on different metrics. The paper reports A1, A2, A3 scores along with accuracy on adv-MNLI, adv-SNLI, etc., whereas running the evaluation here gives round-wise accuracy on both the test and dev data. Is there a formula to calculate the A1, A2, A3 scores from the test scores reported in this repo?
Thanks in advance for the help :)
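A minimal sketch of one plausible reading, assuming the paper's A1/A2/A3 are simply the per-round accuracies on ANLI rounds R1/R2/R3 (this is an assumption about the paper's convention, not a confirmed answer; `preds` and `labels` are illustrative names):

```python
def round_accuracy(preds, labels):
    """Accuracy on a single ANLI round: fraction of correct predictions.

    Under the assumption that Ai is just the accuracy on round Ri's
    evaluation set, no extra formula is needed beyond this ratio.
    """
    assert len(preds) == len(labels) and len(labels) > 0
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy example: 3 of 4 round-1 predictions match the gold labels.
r1_preds = [0, 1, 2, 1]
r1_labels = [0, 1, 1, 1]
a1 = round_accuracy(r1_preds, r1_labels)  # 0.75
```

If that assumption holds, the round-wise accuracies the repo's evaluation already prints would be the A1/A2/A3 numbers directly, with no further combination step.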