xuanqing94 / RobustNet

Robust neural network
MIT License

Comparison with adversarial training #2

Open ScottLiao920 opened 4 years ago

ScottLiao920 commented 4 years ago

Hi xuanqing, I just read your paper "Towards Robust Neural Networks via Random Self-ensemble" and I think RSE is a brilliant idea. I have one simple question about your experiment setting. You mentioned adversarial training with a PGD adversary in Table 1, but most of your comparisons are against an FGSM adversary. As Carlini & Wagner pointed out in their paper on the C&W attack (https://arxiv.org/abs/1608.04644), adversarial training with an FGSM adversary is not useful at all. May I know which adversary you used in those supplementary figures, and whether you have any data comparing against a PGD adversary?
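For readers of the thread, the difference between the two adversaries can be sketched on a toy loss (illustrative numpy code, not from the paper; the quadratic loss and `target` are stand-ins for a real network's loss surface):

```python
import numpy as np

# Toy differentiable loss L(x) = 0.5 * ||x - target||^2, whose gradient
# w.r.t. the input x is simply (x - target).
target = np.array([1.0, -2.0, 0.5])

def grad_loss(x):
    return x - target

def fgsm(x, eps):
    # FGSM: a single step of size eps along the sign of the gradient.
    return x + eps * np.sign(grad_loss(x))

def pgd(x, eps, alpha=0.05, steps=10):
    # PGD: repeated small signed-gradient steps, each projected back
    # into the L-inf ball of radius eps around the clean input x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Because PGD iterates and projects, it finds stronger perturbations inside the same L-inf budget, which is why FGSM-only adversarial training tends to be a weak baseline.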

xuanqing94 commented 4 years ago

@ScottLiao920

You are right. Our method appeared slightly later than PGD. Back then, the most popular evaluation method was C&W with the L2 norm, so we decided not to compare against PGD adversarial training (which is usually evaluated under the L-inf norm).

There was indeed an "adversarial training" baseline in my paper, but as you noted, it wasn't Madry's.

Our defense method is weaker than PGD adversarial training, but considering the training speed, I would regard it as a trade-off. This is also stated in the updated version of our paper.

Overall, training with noisy data is just not as effective as training with worst-case data right now.

Another interesting direction related to this is verified robustness; see this paper: https://arxiv.org/pdf/1906.04584.pdf . It shows that randomized training is still beneficial with respect to robust accuracy.

ScottLiao920 commented 4 years ago

Thanks xuanqing!