Closed why1988seu closed 5 years ago
There is no prior record of Gaussian augmentation and label smoothing being used together, so I don't really know if this should work. Gaussian noise can degrade performance if you introduce too much of it. What value of the sigma parameter do you use for Gaussian noise? I suppose your cifar-10 data is normalized between 0 and 1? Also, when you say you apply Gaussian augmentation and label smoothing in the adversarial training example, does this mean that you also add adversarial samples to your training procedure?
(1) sigma is 1 (the default value; I didn't change it).
(2) I load cifar-10 with the example code, unchanged: (x_train, y_train), (x_test, y_test), min_, max_ = load_dataset(str('cifar10'))
(3) I apply Gaussian augmentation or label smoothing to the x_train dataset and train the model. Then I use the FGSM or DeepFool method to produce the x_test_adv dataset and test the accuracy.
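For reference, the two defences in that workflow can be sketched in plain numpy. This is an illustrative sketch of the ideas, not the ART implementation; the helper names and defaults here are my own:

```python
import numpy as np

def gaussian_augmentation(x, sigma, ratio=1.0, clip=(0.0, 1.0)):
    """Append Gaussian-perturbed copies of a fraction `ratio` of the samples.

    The model is then trained on the original data plus the noisy copies.
    """
    rng = np.random.default_rng(0)
    n = int(len(x) * ratio)
    idx = rng.choice(len(x), size=n, replace=False)
    noisy = x[idx] + rng.normal(0.0, sigma, size=x[idx].shape)
    noisy = np.clip(noisy, *clip)  # keep augmented samples in the valid data range
    return np.concatenate([x, noisy])

def label_smoothing(y_onehot, max_value=0.9):
    """Replace hard one-hot targets with smoothed ones.

    The true class gets `max_value`; the remaining probability mass is
    spread uniformly over the other classes.
    """
    k = y_onehot.shape[1]
    smoothed = np.full(y_onehot.shape, (1.0 - max_value) / (k - 1))
    smoothed[y_onehot.astype(bool)] = max_value
    return smoothed
```

With sigma=1 on data in [0, 1], the `np.clip` step above saturates a large share of the noisy pixels, which destroys much of the image content.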
@why1988seu, thanks for the additional info! When loading cifar-10, the data will be normalized between 0 and 1. In that case, a standard deviation sigma of 1 is a pretty high value, which can explain the results you obtained. I recommend starting with something like sigma=0.1 or sigma=0.3. I can also point you to some of our previous work, where you'll find numerical results for Gaussian augmentation and label smoothing on cifar-10 (https://arxiv.org/pdf/1707.06728).
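A quick numpy check makes it concrete why sigma=1 is large for data in [0, 1] (purely illustrative, not part of any library):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000)  # stand-in for pixel values in [0, 1]

for sigma in (1.0, 0.3, 0.1):
    noise = rng.normal(0.0, sigma, size=x.shape)
    # fraction of perturbed pixels that leave the valid [0, 1] range
    out_of_range = np.mean((x + noise < 0) | (x + noise > 1))
    print(f"sigma={sigma}: mean |noise|={np.abs(noise).mean():.3f}, "
          f"{out_of_range:.0%} of pixels pushed out of range")
```

For a zero-mean Gaussian the expected absolute perturbation is sigma * sqrt(2/pi), i.e. roughly 0.8 per pixel at sigma=1: comparable to the whole data range, whereas sigma=0.1 perturbs pixels by about 0.08 on average.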
Thank you very much! How can I tune hyperparameters such as sigma? There are many hyperparameters in the attack and defence methods.
Yes, the attacks and defences have quite a number of parameters, some of which influence the results significantly. For sigma, the most important piece of information is probably the data range. I would encourage using a value representing a fraction of the data range. You would still have to try out a few values to find the best one and make sure it does not degrade performance on clean samples.
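The advice above (sweep sigma as a fraction of the data range, scoring each value) could be sketched like this. Everything here is hypothetical scaffolding: `evaluate` stands in for a real training run that returns some combined clean/adversarial accuracy score:

```python
import numpy as np

def pick_sigma(data_range, evaluate, fractions=(0.05, 0.1, 0.2, 0.3)):
    """Try sigma values expressed as fractions of the data range.

    `evaluate(sigma)` is a placeholder for: train with Gaussian
    augmentation at that sigma, then measure accuracy on clean and
    adversarial test data and combine the two into one score.
    """
    results = {}
    for frac in fractions:
        sigma = frac * data_range
        results[sigma] = evaluate(sigma)
    best = max(results, key=results.get)  # sigma with the highest score
    return best, results

# Toy scoring function standing in for a real train-and-evaluate loop;
# here it peaks at sigma = 0.1 just to show the mechanics.
best, scores = pick_sigma(1.0, evaluate=lambda s: 1.0 - abs(s - 0.1))
```

In practice each `evaluate` call is a full training run, so the sweep is coarse by necessity; the key point is anchoring the candidate sigmas to the data range rather than picking absolute values.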
Thank you very much! I have read your paper and it's very useful. Are there any other papers that discuss numerical results for various algorithms?
I used the Gaussian augmentation and label smoothing methods on the cifar-10 adversarial training example, but the accuracy of the defended models is lower than that of a model trained on the clean dataset. Is this normal?