Closed joelma1 closed 4 years ago
Hi @joelma1 Thank you very much for raising this issue and describing your observations.
Would you be able to provide more details about your experiment, such as model architecture, training setup, dataset, framework (TensorFlow?), and ART version?
Would you be able to share a script or notebook that runs your experiment for us to reproduce?
I have been applying evasion adversarial attacks (FGSM, PGD, BIM) to medical datasets, and noticed that once the attack strength (epsilon) increases beyond a certain point, classifier accuracy starts increasing instead of decreasing.
Is there an underlying reason for this behavior?
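A sweep like the one described can be sketched end to end with a toy, self-contained setup: a plain-NumPy logistic regression and a hand-rolled one-step FGSM, standing in here for ART's implementation and the medical data (both are assumptions, not the actual experiment). Note that this linear toy will not reproduce the rebound itself; for a linear model the one-step FGSM perturbation degrades accuracy monotonically in epsilon.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (hypothetical stand-in for the medical dataset).
n, d = 400, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.5])
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a tiny logistic-regression classifier by gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

def accuracy(Xe):
    return np.mean((sigmoid(Xe @ w + b) > 0.5) == y)

# One-step FGSM: x_adv = x + eps * sign(dL/dx); for logistic
# regression with cross-entropy loss, dL/dx = (p - y) * w.
def fgsm(Xe, eps):
    grad_x = (sigmoid(Xe @ w + b) - y)[:, None] * w[None, :]
    return Xe + eps * np.sign(grad_x)

# Sweep epsilon and record accuracy, as in the experiment above.
eps_grid = [0.0, 0.1, 0.5, 1.0, 2.0, 4.0]
accs = [accuracy(fgsm(X, e)) for e in eps_grid]
print(dict(zip(eps_grid, np.round(accs, 3))))
```

With a real image classifier one would clip `x_adv` to the valid input range (e.g. [0, 1]) after the perturbation, as ART's attacks do via `clip_values`; that step is omitted here because the toy features are unbounded.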
Example plot: