Open Opdoop opened 3 years ago
Hi, thank you for asking the question. We are glad that you enjoyed the paper.
You asked a very good question about 'Adv→Ori'. One thing to correct: Adv→Ori does not mean the "generalization gap"; it simply measures accuracy on the original in-distribution test data. Although we did not report Adv→Ori results in this paper, in my experience with adversarial attack papers, Adv→Ori is usually almost the same as Ori→Ori, and in some cases slightly worse due to the distribution difference between Adv and Ori.
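To make the Train→Test notation concrete, here is a toy sketch (not the paper's actual models or data): "Adv→Ori" means train on the Adversarial set and evaluate on the Original test set. The `train` function, the datasets, and all labels below are made-up stand-ins purely to illustrate how the four cells of the matrix are computed.

```python
from collections import Counter, defaultdict

def accuracy(preds, labels):
    """Fraction of predictions that match the gold labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def train(dataset):
    """Toy stand-in for fine-tuning: memorize the majority label per feature."""
    counts = defaultdict(Counter)
    for x, y in dataset:
        counts[x][y] += 1
    return lambda x: counts[x].most_common(1)[0][0] if x in counts else 0

# Toy (feature, label) datasets; "adv" relabels feature 1 to mimic hard examples.
ori_train = [(0, 0), (1, 1), (2, 1)]
adv_train = [(0, 0), (1, 0), (2, 1)]
ori_test  = [(0, 0), (1, 1)]   # "Ori": original in-distribution test set
new_test  = [(1, 0), (2, 1)]   # "N":   new (e.g. ARTS-style) test set

# Fill in the full Train→Test matrix.
results = {}
for train_name, train_set in [("Ori", ori_train), ("Adv", adv_train)]:
    model = train(train_set)
    for test_name, test_set in [("Ori", ori_test), ("N", new_test)]:
        acc = accuracy([model(x) for x, _ in test_set],
                       [y for _, y in test_set])
        results[f"{train_name}→{test_name}"] = acc
        print(f"{train_name}→{test_name}: {acc:.2f}")
```

In this toy setup, Adv→N improves over Ori→N while Adv→Ori drops below Ori→Ori, which is exactly the kind of robustness-vs-in-distribution tradeoff the question asks about; whether the real paper's models behave this way would need the actual experiment.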
I will involve my co-first author Xiaoyu @XINGXIAOYU here. If you urgently need the experimental results on Adv→Ori, we can see whether she could offer some help with it.
@zhijing-jin Hi Jin, could you please give more details on how the Adv→N part comes into play? The paper writes: "trained on the Adversarial data and evaluated on the New test set." So, what is the Adversarial data? (Is it still the original test set?) And what are the parties in the adversarial training? Thank you.
Great work and solid experiments 🎉 In the paper, Table 10b, column 'Adv→N' shows that adversarial training can improve model performance on the ARTS test set. My question is: have you tested the effect of adversarial training on the original test set, i.e. 'Adv→Ori'? That way we could see how the generalization gap changes after training on hard examples, and whether there is a tradeoff between robustness and generalization in the ABSA task.