-
When running `python Setups/Attacks.py --dataset=cifar10 --attack=boby --eps=0.1 --test_size=1 --Adversary_trained=False`,
in Setups/Attacks.py at line 338 the variables `lab_` and `tar_` have not b…
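For context, this is the kind of defensive assignment that avoids the error class being reported (a variable only assigned on some branches before it is used). A minimal sketch only: `resolve_labels` and its arguments are hypothetical, not the repo's actual code around line 338.

```python
# Hypothetical sketch of a defensive fix; the real control flow in
# Setups/Attacks.py may differ. The symptom suggests `lab_` and `tar_`
# are only assigned on some branches before they are used.

def resolve_labels(true_labels, target_labels=None):
    """Always return a (lab_, tar_) pair so later code never sees them
    undefined. For untargeted attacks the target is the true label."""
    lab_ = true_labels
    tar_ = target_labels if target_labels is not None else true_labels
    return lab_, tar_

# Untargeted case: both variables are defined regardless of attack mode.
lab_, tar_ = resolve_labels(true_labels=[3], target_labels=None)
```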
-
A command to save all transformed query attacks, along with details such as the model's output confidence, is needed for people who work on attack-detection systems, especially for black-box attacks in…
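Until such a command exists, a thin wrapper around the model's query interface can capture what this request describes. A minimal sketch, assuming a PyTorch classifier; `QueryLogger`, the JSONL layout, and all names here are hypothetical, not part of the repo:

```python
import json
import torch
import torch.nn.functional as F

class QueryLogger:
    """Hypothetical wrapper that records every query an attack makes,
    together with the model's prediction and output confidence, for
    later use in attack-detection research."""

    def __init__(self, model, log_path="queries.jsonl"):
        self.model = model
        self.log = open(log_path, "a")
        self.n_queries = 0

    @torch.no_grad()
    def __call__(self, x):
        logits = self.model(x)
        probs = F.softmax(logits, dim=1)
        conf, pred = probs.max(dim=1)
        for i in range(x.shape[0]):
            self.log.write(json.dumps({
                "query_id": self.n_queries,
                "pred": int(pred[i]),
                "confidence": float(conf[i]),
                # the transformed (perturbed) input itself, flattened
                "input": x[i].flatten().tolist(),
            }) + "\n")
            self.n_queries += 1
        return logits
```

An attack would then query `QueryLogger(model)` instead of `model`, so every transformed input and its confidence ends up in `queries.jsonl`.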
-
**Chapter 17 - Robust AI**
- First and foremost, this chapter was incredibly long -- nearly double the size of some of the other lengthier chapters in this book. It was so much material that it was…
-
> The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we argue that the field of medicine may be uniquely susceptible to a…
-
### ❔ Any questions
Hi, I need to use a black-box attack model for testing. In the actual scenario I don't know which model the other party uses; I will only receive feedback from the other party's model, bu…
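Since only the other party's feedback is available, a score-based black-box attack fits this setting. Below is a minimal SimBA-style sketch (Guo et al., 2019), assuming the feedback channel returns class probabilities; `query_fn` is a hypothetical stand-in for whatever API you actually call:

```python
import numpy as np

def simba_attack(query_fn, x, true_label, eps=0.1, max_queries=1000):
    """SimBA-style score-based black-box attack. `query_fn(x)` is the
    feedback channel to the other party's model and must return a
    vector of class probabilities. Perturbs one random coordinate at a
    time, keeping a change only when the true-class probability drops."""
    x_adv = x.copy()
    best_p = query_fn(x_adv)[true_label]
    for d in np.random.permutation(x.size)[:max_queries]:
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand.flat[d] = np.clip(cand.flat[d] + sign * eps, 0.0, 1.0)
            p = query_fn(cand)[true_label]
            if p < best_p:   # the feedback says we hurt the model: keep it
                x_adv, best_p = cand, p
                break
    return x_adv
```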
-
This is to discuss outstanding issues for Tip 10: Don’t share models trained on sensitive data.
https://github.com/Benjamin-Lee/deep-rules/blob/master/content/12.privacy.md
-
In the comments, we can describe the initial kinds of attacks we want to include in the tool.
-
Regarding the paper "Model Inversion Attacks against Graph Neural Networks": I saw that this paper proposes zeroth-order gradient estimation in the black-box setting and a reinforcement-learning-based graph model inversion attack. I would like to carry out further research building on zaixi's work, so I hope you can share the code for these two scenarios. Thank you, zaixi!
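For reference while the code is unavailable: zeroth-order gradient estimation in the black-box setting is typically done with randomized finite differences over model queries. A generic sketch, not the authors' implementation; `loss_fn` stands in for whatever querying of the victim model the attack uses:

```python
import numpy as np

def zeroth_order_gradient(loss_fn, x, mu=0.01, n_samples=20):
    """Estimate the gradient of loss_fn at x via two-point randomized
    finite differences: for u ~ N(0, I) and small mu,
    E[(f(x + mu*u) - f(x - mu*u)) / (2*mu) * u] approximates grad f(x)."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)        # random probe direction
        diff = loss_fn(x + mu * u) - loss_fn(x - mu * u)
        grad += (diff / (2.0 * mu)) * u      # one two-point estimate
    return grad / n_samples                  # average over probes
```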
-
Hi, authors. Thank you for your efforts toward safe AIGC and for your creative work. However, I have some questions about the experiments in the paper.
1. Do the experiments for attacking open-source models (Sec. …
-
Let's collect the sources used for the paper here! HALP WANTED!