Closed · s-huu closed this issue 4 years ago
Hi, we used the L2 DeepFool attack implemented in the Foolbox library.
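For anyone curious about what that attack does: the core l2 DeepFool step is a projection onto the nearest linearized decision boundary. Below is a toy single-step version for a multiclass *linear* classifier (the weights in the test are made up for illustration; the actual experiments used Foolbox's implementation, which iterates this linearization on the real network):

```python
import math

def deepfool_linear(x, W, b, overshoot=0.02):
    """One-step l2 DeepFool for a multiclass linear classifier f(x) = Wx + b.
    Finds the decision boundary closest to x (in l2) among all classes
    k != argmax, and moves x just past it. Toy sketch only; on a deep
    network this linearized step is repeated until the label flips."""
    logits = [sum(wi * xi for wi, xi in zip(w, x)) + bi
              for w, bi in zip(W, b)]
    k0 = max(range(len(logits)), key=logits.__getitem__)  # current prediction
    best = None
    for k in range(len(logits)):
        if k == k0:
            continue
        w_diff = [wk - w0 for wk, w0 in zip(W[k], W[k0])]
        f_diff = logits[k] - logits[k0]
        norm = math.sqrt(sum(w * w for w in w_diff)) or 1e-12
        dist = abs(f_diff) / norm  # l2 distance to the k0-vs-k boundary
        if best is None or dist < best[0]:
            best = (dist, f_diff, w_diff, norm)
    _, f_diff, w_diff, norm = best
    # Minimal l2 perturbation onto the nearest boundary, slightly overshot
    # so the new point lands on the other side.
    step = (1 + overshoot) * abs(f_diff) / (norm * norm)
    return [xi + step * wi for xi, wi in zip(x, w_diff)]
```

With three classes and x = [2, 1] predicted as class 0, the nearest boundary is the one against class 1, and the returned point sits just across it.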
Got it. Thanks.
Hi, I am reopening this issue to ask more about the undefended model: when you say "... trained a neural network with the base classifier's architecture on clean data, and subjected it to a DeepFool l2 adversarial attack, in order to obtain an empirical upper bound ...", do you mean that instead of training on noisy images and using clean images in certify.py, you are training on clean images and using DeepFool adversarial images in certify.py?
So, I didn't attack a smoothed classifier using DeepFool; I attacked an undefended network of the same architecture that I was using as the base classifier. The point of the figure in the paper was to illustrate that a smoothed network is more robust than an undefended classifier with the base classifier's architecture. To answer your question: I didn't use certify.py at all, since I was attacking a normal neural network.
Hi, is it possible to certify an undefended network using certify.py (and to compare the robustness of two different undefended networks)? When I try to use certify.py on an undefended network (trained by myself), the log returned by certify.py shows the same incorrect label prediction for every example. Without the Smooth class, my model returns the correct clean accuracy, so I believe the data loading etc. is set up correctly. I have attached the log for my model.
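For context, my understanding is that the Smooth class predicts by majority vote over many Gaussian-noised copies of each input (the PREDICT procedure in the randomized smoothing paper), so a network trained only on clean images may collapse to a single label under that noise, which would explain my log. A toy sketch of that voting procedure (the 2-D threshold classifier, σ, and sample count are all made up for illustration):

```python
import math
import random

# Toy 2-D "base classifier": class 0 below the line x + y = 1, else class 1.
# Stands in for the real base network; made up for illustration.
def base_classifier(x, y):
    return 0 if x + y < 1.0 else 1

def smoothed_predict(x, y, sigma, n=1000, alpha=0.001, seed=0):
    """Majority vote over n Gaussian-noised copies of (x, y), abstaining
    (return -1) when the vote is not statistically clear -- the same shape
    as the PREDICT procedure of randomized smoothing."""
    rng = random.Random(seed)
    counts = [0, 0]
    for _ in range(n):
        counts[base_classifier(x + rng.gauss(0.0, sigma),
                               y + rng.gauss(0.0, sigma))] += 1
    top = max(range(2), key=counts.__getitem__)
    # Two-sided binomial test: p-value of seeing counts[top] heads in n
    # fair coin flips; abstain unless the majority is significant.
    tail = sum(math.comb(n, k) for k in range(counts[top], n + 1)) / 2 ** n
    return top if min(1.0, 2.0 * tail) <= alpha else -1
```

The point of the sketch: the smoothed prediction only tracks the true class if the base classifier votes correctly under σ-noise, which is why the repo trains base classifiers on noisy data; a clean-trained network sees inputs far from its training distribution and its votes can concentrate on one wrong class for every example.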
Hi,
In Figure 6 of the paper, you mentioned that the dashed line "is an upper bound on the empirical robust accuracy of an undefended classifier with the base classifier's architecture." I'm curious which attack method was used to measure the robust accuracy here, since there are many empirical attacks. Thank you.