-
Hello. When I use the attacked models you provide, as well as a model I trained myself with celeba64.py (test accuracy 93.298), as the target model for lommagmi.py, the attack achieves a top-1 accuracy between 0.3 and 0.4 and a top-5 accuracy of around 0.5. This differs considerably from the experimental results reported in the paper "Re-thinking Model Inversion Attacks Against Deep Neural Networks". I would like to understand why these results are…
-
In the second paragraph of Federated Learning, I would emphasise that gradient leakage is only one example of a vulnerability. It might also be worth briefly mentioning the others, and noting that there might still be unknow…
-
Hi. While running your framework, we ran into a problem: the target model for the plgmi.py attack is missing. On Google Drive there are only discriminators and generators, which do not fit the ro…
-
Regarding the paper "Model Inversion Attacks against Graph Neural Networks": I saw that it proposes zeroth-order gradient estimation and a reinforcement-learning-based graph model inversion attack for the black-box setting. I would like to build on zaixi's work and carry out further research, so I hope the code for these two scenarios can be shared. Thank you, zaixi!
-
Thank you for your awesome work.
What should the `placeholder_token` be for the i2p experiment?
Currently, it's ```--placeholder_token="" --initializer_token="art"```, but I'm asking if this is c…
-
Security of AI agents in a broad sense
CoreLocker and MInference are quite interesting. But how can I formulate a topic with three objectives that covers all of this?
- obj1: explore thre…
-
As a security researcher, I would like to open a discussion on making machine learning practices secure as their use becomes ubiquitous. We will discuss "not so commonly known" vulnerabilities in machine learning applica…
-
# Thanks for your valuable comments.
We thank the AC and all reviewers for their valuable time and constructive feedback. We briefly summarize the suggestions and our answers as follows. You can …
-
reviews
-
The following is an initial review taken from the Slack logs: https://owasp.slack.com/archives/C04PESBUWRZ/p1677192099712519
by @robvanderveer
---
Dear all,
I did a first scan through the list t…