-
Hi. While running your framework, we encountered a problem: the target model for the plgmi.py attack is missing. On Google Drive there are only discriminators and generators, which do not fit the ro…
-
Hello. While trying to run your code, I found that some files are missing, which makes the code hard to run. For example, in /Model-Inversion-Attack-ToolBox-main/examples/standard/attacks/lokt.py, the path referenced in the code, "checkpoints_v2/attacks/lokt/lokt_celeba64_densenet169_celeba64_ir152.pth", does not exist, and downloading your…
-
In the second paragraph of Federated Learning, I would emphasise that gradient leakage is one example of a vulnerability. Maybe also briefly mention the others, and that there might still be unknow…
-
Regarding the paper "Model Inversion Attacks against Graph Neural Networks": I saw that it proposes zeroth-order gradient estimation in the black-box setting and a reinforcement-learning-based graph model inversion attack. I would like to do further research building on zaixi's work, so I hope you could share the code for these two settings. Thank you, zaixi!
-
Thank you for your awesome work.
What should the `placeholder_token` be for the i2p experiment?
Currently, it's ```--placeholder_token="" --initializer_token="art"```, but I'm asking if this is c…
-
Security of AI agents in a broad sense.
CoreLocker and MInference are quite interesting. But how can I frame a topic with three objectives that covers all of this?
- obj1: explore thre…
-
As a security researcher, I would like to open a discussion on making Machine Learning practice secure as it becomes ubiquitous. We will discuss "not so commonly known" vulnerabilities in machine learning applica…
-
**Desired solution**
TODO:
- [x] #89
- [x] #60
- [x] Key points need to be added #58
- [x] Questions still need to be done #58
- [x] "Only when all parties work together, the original value can be r…
-
reviews
-
# Thanks for your valuable comments.
We thank the AC and all reviewers for their valuable time and constructive feedback. We briefly summarize the suggestions and our answers as follows. You can …