Closed: MaxCorbel closed this issue 4 years ago.
Solution
There is no problem with the updated metrics.py file. If you are getting an index-out-of-bounds error, it is caused by the slicing at the bottom of craft_adversarial_examples.py:
```python
data_bs = data_bs[:10]
```
The [:10] makes craft_adversarial_examples.py create adversarial examples for only the first 10 pictures. So when you run eval_model.py, it expects 10,000 adversarial examples but there are only 10.
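A minimal illustration of the resulting failure mode, using the array sizes from this report (the loop is only a stand-in for the indexing that metrics.py does internally):

```python
import numpy as np

labels = np.zeros(10000)         # one label per MNIST test image
adv = np.zeros((10, 28, 28, 1))  # only the first 10 AEs were crafted

for i in range(len(labels)):
    example = adv[i]             # IndexError once i reaches 10
```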
To fix this, get rid of the [:10] so that it creates an adversarial example for all 10,000 pictures. To reduce runtime, create a subset using subsample.py and store it somewhere, then load your new samples and labels into craft_adversarial_examples.py, as sketched below.
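A minimal sketch of that loading step, assuming subsample.py saved the subset as .npy files (the file names here are hypothetical):

```python
import numpy as np

# Hypothetical output paths; use whatever paths subsample.py actually wrote.
data_bs = np.load("../../task1_data/subsamples-mnist.npy")
labels = np.load("../../task1_data/sublabels-mnist.npy")

# Note: no [:10] slice here, so every benign sample gets an adversarial
# example and eval_model.py sees matching counts.
```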
I have tried generating subsamples, but the runtime is still slow. Did you change num_classes to a lower number?
No, I believe num_classes has to stay the same because there are 10 different digits. I am also still having runtime issues when creating a subsample of 1,000. I am thinking I will have to lower the ratio to 0.05 in order to create a subsample of 500, as the only attacks that I can run do not actually affect any of the models, including the undefended model. (See the subsampling sketch below.)
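For reference, a class-balanced subsampler along those lines; this is a sketch of what subsample.py presumably does, not its actual code, and the function signature is an assumption:

```python
import numpy as np

def subsample(data, labels, ratio=0.05, num_classes=10, seed=0):
    """Draw a class-balanced subset of roughly ratio * len(data) samples."""
    rng = np.random.default_rng(seed)
    per_class = int(len(data) * ratio) // num_classes
    # Handle both one-hot and integer label encodings.
    classes = labels.argmax(axis=1) if labels.ndim > 1 else labels
    keep = []
    for c in range(num_classes):
        pool = np.where(classes == c)[0]
        keep.extend(rng.choice(pool, size=min(per_class, len(pool)),
                               replace=False))
    keep = np.asarray(keep)
    return data[keep], labels[keep]

# ratio=0.05 on the 10,000-image MNIST test set -> 500 samples (50 per digit);
# num_classes stays at 10 because MNIST has 10 digit classes.
```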
Description
When running eval_model.py on adversarial examples generated by craft_adversarial_examples.py, rather than on those already included in data, there is an IndexError in metrics.py. This error does not occur with the adversarial examples included in data.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The expected result is for eval_model.py to print:
"Evaluations on [../../[Data Directory]/[Adversarial Example File]]: {'UM': 0.8842541157458843, 'Ensemble': 0.6725583274416725, 'PGD-ADT': 0.21331178668821332}".
The actual output from eval_model.py when loading the given default files is:
"Evaluations on [../../data/test_AE-mnist-cnn-clean-fgsm_eps0.3.npy]: {'UM': 0.8842541157458843, 'Ensemble': 0.6725583274416725, 'PGD-ADT': 0.21331178668821332}"
Updates
Update 1: After removing the changes made to metrics.py yesterday, eval_model.py runs without errors. However, the evaluation is wrong and it prints:
"Evaluations on [../../task1_data/fgsm0.1.npy]: {'UM': -0.0, 'Ensemble': -0.0, 'PGD-ADT': -0.0}".
(task1_data is the directory where the generated adversarial example files are stored, and fgsm0.1.npy is the name of the generated adversarial example file.)
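When every model evaluates to -0.0, one thing worth checking is whether the generated AE file and the label file being evaluated against actually line up; a quick sanity check along these lines (the file names are hypothetical):

```python
import numpy as np

# Hypothetical file names; substitute the files you actually generated.
adv = np.load("../../task1_data/fgsm0.1.npy")
labels = np.load("../../task1_data/sublabels-mnist.npy")

print(adv.shape, labels.shape)
assert len(adv) == len(labels), "AE/label count mismatch skews every metric"
```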
Edits
Edit 1: There also seems to be an error when loading the athena-mnist.json and model-mnist.json files from experiment instead of demo:
"TypeError: expected str, bytes or os.PathLike object, not NoneType"
Edit 2: The reason for the above TypeError is that experiment/model-mnist.json does not have the key 'pgd-trained'; therefore, when running eval_model.py with the version of model-mnist.json from experiment, 'pgd-trained' is None.
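A minimal sketch of that failure path, assuming the loader reads the JSON config and joins the 'pgd-trained' value into a file path (the exact loader code and the directory name are assumptions, not the project's actual code):

```python
import json
import os

with open("experiment/model-mnist.json") as f:
    model_cfg = json.load(f)

# dict.get returns None for the missing 'pgd-trained' key; passing that None
# into os.path.join is exactly the kind of call that raises
# "TypeError: expected str, bytes or os.PathLike object, not NoneType".
pgd_file = model_cfg.get("pgd-trained")
if pgd_file is None:
    raise KeyError("experiment/model-mnist.json has no 'pgd-trained' entry; "
                   "copy the key over from demo/model-mnist.json.")

pgd_path = os.path.join("models", pgd_file)  # directory name is an assumption
```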