carlini / nn_robust_attacks

Robust evasion attacks against neural networks to find adversarial examples
BSD 2-Clause "Simplified" License

Low validation accuracy of CIFAR #37

Open HaiQW opened 4 years ago

HaiQW commented 4 years ago

I ran the script to train the model on CIFAR-10 and then ran the L0 attack on the trained model.

However, the validation accuracy reported by the training script is very low. It is not reasonable to perform adversarial attacks on a model with such low accuracy.
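For reference, here is a minimal sketch of how the validation accuracy can be checked (assuming the repo's `setup_cifar.CIFAR` loader and `CIFARModel` wrapper, and that `train_models.py` saved the weights to `models/cifar`; the exact path may differ on your setup):

```python
# Minimal sketch: report the trained CIFAR model's validation accuracy.
# Assumes setup_cifar.CIFAR / CIFARModel from this repo and that the
# weights were saved to "models/cifar" by train_models.py.
import numpy as np
import tensorflow as tf

from setup_cifar import CIFAR, CIFARModel

with tf.Session() as sess:
    data = CIFAR()                            # inputs scaled to [-0.5, 0.5], one-hot labels
    model = CIFARModel("models/cifar", sess)  # restores the saved Keras weights

    # The wrapped Keras model outputs logits, so argmax gives the predicted class.
    preds = model.model.predict(data.validation_data)
    acc = np.mean(np.argmax(preds, 1) == np.argmax(data.validation_labels, 1))
    print("validation accuracy: %.4f" % acc)
```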

carlini commented 3 years ago

Well, responding a year late is better than not responding at all. On the off chance you see this: what accuracy do you get? I think I got ~80% accuracy on this.

HaiQW commented 3 years ago

> Well, responding a year late is better than not responding at all. On the off chance you see this: what accuracy do you get? I think I got ~80% accuracy on this.

thx