Closed FFYYang closed 4 years ago
There are only 4 tasks' hyperparameters in this file. Would you please release the others?
Do you have any comments on the scale of the norm (the $\epsilon$-ball)? Does it relate directly to the adversarial effect, or to general capability?
@PantherYan In my opinion, the selection of epsilon is tricky and depends on your task's dataset: a large epsilon may cause the generated adversarial example to change the gold label, while a small epsilon cannot threaten the model.
@YasinQiu Thanks for your reply.
I will read more literature to resolve the questions that confused us.
Until yesterday, I had been training my implementation of FreeLB in a plugin format without the dropout mask (https://github.com/zhuchen03/FreeLB/issues/8#issuecomment-627669810). It worked well with one setting of hyperparameters. But after I added the dropout-mask implementation and switched to another set of hyperparameters, FreeLB adversarial training went wrong: accuracy falls as training proceeds.
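For context on the dropout-mask point: since FreeLB takes several ascent steps on the same minibatch, a common approach is to sample one dropout mask and reuse it for every ascent step, so that each step optimizes a consistent objective; resampling the mask per step makes the ascent gradients inconsistent. A minimal numpy sketch of that idea (the helper name and shapes are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

def fixed_dropout_mask(shape, p, rng):
    # Sample one inverted-dropout mask; the caller reuses it across ascent steps.
    return (rng.random(shape) >= p) / (1.0 - p)

x = rng.normal(size=(4, 8))                      # toy hidden states for one minibatch
mask = fixed_dropout_mask(x.shape, p=0.1, rng=rng)

# Three ascent steps, all seeing the SAME dropout pattern:
outputs = [(x + 0.01 * k) * mask for k in range(3)]
assert all((o == 0).sum() == (outputs[0] == 0).sum() for o in outputs)
```

If a fresh mask were drawn inside the loop instead, the zeroed positions would differ from step to step and the perturbation would be ascending a different stochastic loss each time.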
It confused me a lot. I will figure out why and post an update.
@YasinQiu The $\epsilon$ is small. There are a lot of papers explaining how to choose the minimum value, or different scales, of $\epsilon$. Here are some for your reference.
As for the explicit value: should it be around 1e-1?
I have added the hyperparameters for 8 of the GLUE tasks in the bash script.
For epsilon, in the current setting you can set it to 0 first, which places no restriction on the maximum norm, and tune the other hyperparameters. In that case the maximum norm is still bounded by the ascent step size, the number of ascent steps, and the initialization.
In the context of security, epsilon restricts the strength of the adversary for better comparisons. However, in our case, you should first observe the norm of the embeddings and choose a strength/epsilon that is not negligible but also does not outweigh the embeddings.
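One way to "observe the norm of the embeddings" in practice is to look at the per-token L2 norms of the embedding matrix and set epsilon to a small fraction of the typical norm. A hedged numpy sketch (the matrix here is random toy data, and the 10% fraction is an assumption, not a recommendation from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(scale=0.02, size=(1000, 128))   # stand-in for a word-embedding matrix

token_norms = np.linalg.norm(emb, axis=1)        # L2 norm of each token's embedding
print(f"median embedding norm: {np.median(token_norms):.3f}")

# Choose an epsilon that is noticeable relative to the embeddings but far from
# outweighing them; the 0.1 fraction is an illustrative assumption:
epsilon = 0.1 * np.median(token_norms)
```

With a real model you would compute `token_norms` from the trained embedding weights instead of random data; the point is only that epsilon should be calibrated against that scale rather than chosen in the abstract.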
@zhuchen03 @PantherYan thx ~!!!