wielandbrendel opened this issue 7 years ago
I am a co-author of Foolbox and coordinator of the Robust Vision Benchmark. The results you report are amazing, particularly so for a black-box attack. I'd like to encourage you to implement your algorithm in Foolbox, a Python package we recently released to ease the use of available adversarial attacks. Foolbox already implements many common attacks, has a simple API, is usable across a range of deep-learning packages (e.g. TensorFlow, Theano, PyTorch) and has already attracted significant adoption in the community. Your attack method could find much wider adoption if it were made available in Foolbox.

Second, your attack method might also be an interesting contender for the Robust Vision Benchmark, in which we pit DNNs against adversarial attacks. We started this benchmark a couple of days ago, and it could make an awesome public showcase for the performance of your algorithm. In particular, your attack would be tested against many different network models (current and future) without any intervention on your side.

I'll be happy to help with technical questions. If you implement ZOO in Foolbox, I can automatically include your algorithm in our benchmark without any additional intervention on your side (i.e. you would not need to prepare and submit a Docker container with the attack). Let me know if that sounds interesting to you.

@wielandbrendel Thank you for your interest in our algorithm! Yes, I will be glad to implement it in Foolbox. To start with, I have two simple questions and hope you can answer them:
Thank you!
@huanzhang12 Thanks for your response!
Carlini and Wagner: I started implementing that attack just today. I'm not sure yet how long it will take, but I can let you know once the implementation is finished.
Foolbox is not focused on white-box attacks. In fact, I believe black-box attacks will become much more important, simply because white-box attacks are so easy to disarm. We have implemented several black-box attacks (mostly simple noise attacks, though), and the only structural difference between white-box and black-box attacks in Foolbox is that the latter never call the model's gradient function ;-). Take a look at the super-simple ContrastAttack to get a feel for it; a minimal sketch of the idea is shown below.
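To illustrate the point, here is a minimal, self-contained sketch of a contrast-reduction attack in that spirit. This is not Foolbox's actual implementation; `predict_fn`, `bounds`, and `steps` are assumed names for this example. Note that it only ever queries the model's predictions and never touches gradients:

```python
import numpy as np

def contrast_reduction_attack(predict_fn, image, label, bounds=(0, 255), steps=1000):
    """Gradient-free (black-box) attack sketch: blend the image toward
    mid-gray until the model's predicted class changes. The model is
    accessed only through predict_fn -- no gradient calls anywhere."""
    min_, max_ = bounds
    target = (min_ + max_) / 2.0  # uniform mid-gray image
    for eps in np.linspace(0, 1, num=steps + 1)[1:]:
        perturbed = (1 - eps) * image + eps * target
        if np.argmax(predict_fn(perturbed)) != label:
            return perturbed  # first adversarial found along the path
    return None  # no adversarial found within the given steps
```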
Looking forward!
@wielandbrendel Thanks for providing those details! Let me know when you finish the implementation of the Carlini and Wagner attack. It shouldn't be hard to extend it to my attack, since ZOO uses the same optimization framework but estimates gradients purely from model queries. Thanks!
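For context, the key ingredient that turns the white-box Carlini & Wagner attack into the black-box ZOO attack is a coordinate-wise symmetric difference quotient that estimates gradients from loss evaluations alone. A minimal sketch (the helper name is hypothetical, and `f` is assumed to return the scalar attack loss computed from model queries):

```python
import numpy as np

def estimate_coordinate_gradient(f, x, idx, h=1e-4):
    """Zeroth-order estimate of the partial derivative of f at x along
    coordinate idx, using two loss evaluations instead of backprop:
        df/dx_i ~= (f(x + h*e_i) - f(x - h*e_i)) / (2h)
    """
    e = np.zeros_like(x, dtype=float)  # one-hot perturbation direction
    e.flat[idx] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)
```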
It's been some time, but Foolbox finally contains a (very beautiful) implementation of the Carlini & Wagner attack: https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/carlini_wagner.py
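Usage follows the standard Foolbox attack interface. Roughly, as a sketch assuming the Foolbox 1.x API and a Keras model `kmodel` whose inputs live in [0, 1] (`image` is a numpy array and `label` the true class index; none of these objects are defined here):

```python
import foolbox

# wrap the underlying model (kmodel, image and label are assumed to exist)
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 1))

# instantiate the attack and run it on a single input
attack = foolbox.attacks.CarliniWagnerL2Attack(fmodel)
adversarial = attack(image, label)  # adversarial example as an array, or None
```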
We would still be very interested in having your ZOO attack as part of Foolbox! Please let me know if we can help.