-
- Reorg/rename/cleanup examples (just to make it easier to navigate & read)
- API documentation
- Clean up docstrings, write/update usage docs
- Publish to readthedocs?
- Conceptual documentation…
-
https://github.com/MadryLab/mnist_challenge
Hello, I am trying to attack MadryLab's defense strategy with your project (advGAN). However, I did not reproduce the 92.76% result mentioned in the article (…
-
*The following peer review was solicited as part of the Distill review process.*
***The reviewer chose to keep anonymity.** Distill offers reviewers a choice between anonymous review and offer…
colah updated
6 years ago
-
Hi team,
As per the README, magika is open to adversarial examples from the community, here's one: https://gist.github.com/s0md3v/747b815cddcb2c9c4c7d0232bcc676ec.
It's a powershell script that …
-
The Inception model I reproduced couldn't do what you did. We usually use 229×229 as the input size for that model, but here it is 224×224. Does this have any effect? Looking forward to your reply.
-
This seems like a very important finding mentioned in your [blog](https://huggingface.co/blog/leaderboard-decodingtrust) and something deserving of further exposition.
Submitting your paper to Gemi…
-
When I run the program, I don't understand how it produces adversarial examples. May I have your WeChat number? Thank you very much.
-
Dear Sir,
I have just taken my first step in scientific research, focused on algorithms for defending against adversarial examples. Recently I read your paper "Adversarial and Clean Data Are Not Twins". I think i…
-
## In one sentence
There are attacks that cause misrecognition by adding tiny perturbations to images, but this work argues that they may not be a practical concern in real-world deployment. The perturbations that induce misdetection are specific to the distance/angle at which the image was captured, and once those change slightly, the image is recognized correctly. In settings like autonomous driving, the distance/angle to the target changes constantly, so the attack should be harmless.
![image](https://user-images.githubuserc…
-
- https://pyimagesearch.com/2021/03/01/adversarial-attacks-with-fgsm-fast-gradient-sign-method/
- https://github.com/MadryLab/mnist_challenge
- https://pytorch.org/tutorials/beginner/fgsm_tutorial.h…
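The FGSM attack covered in the tutorials linked above can be sketched without a deep-learning framework. A minimal NumPy version on a toy logistic-regression "model" is below; the weights `w`, `b`, and the epsilon value are illustrative placeholders, not taken from any of the linked repos:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(dL/dx), with L the binary
    cross-entropy loss of a logistic-regression model p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)    # model's predicted probability of class 1
    grad_x = (p - y) * w      # analytic input-gradient of the BCE loss
    # Perturb in the gradient-sign direction, then clip to the valid pixel range
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy data and weights (hypothetical, not from the MNIST challenge)
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.uniform(size=8)
y = 1.0

x_adv = fgsm(x, y, w, b, eps=0.1)
# The perturbation is bounded: each pixel moves by at most eps
print(np.max(np.abs(x_adv - x)))
```

The same one-step structure underlies the PyTorch tutorial version, with `grad_x` obtained by autograd instead of the closed-form gradient used here.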