-
Hello, thank you very much for organizing and compiling this series of papers — it has been a great help! While reading the literature, I noticed that some papers labeled "2024-NeurIPS" in the repository are actually "2023-NeurIPS". Here is the list of relevant papers I found, for reference:
2023-NeurIPS: [Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularizatio…
-
Hello, I am very interested in the experiments on adversarial attacks against pixel diffusion models in the paper. Will the code be released?
-
Hello, I am currently reproducing your paper. Regarding Figure 4, I have some questions that I would like to ask you:
1. Is the dataset ImageNet-compatible?
2. Besides DiffAttack, which specific surr…
-
Hi Indu,
Thank you for your wonderful work! This work is quite interesting to me and I think the results are amazing. However, I was confused when I tried applying this method to my own dataset. I …
-
## Paper title (as in the original)
Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
## In a nutshell
To strengthen safe concept erasure in diffusion models (e.g., nudity or specific objects), adversarial training…
-
Hello, I'm really interested in your work! However, I have some questions about the adversarial attack with text perturbation. In Table 5, the adversarial attack with only perturbation on the text cou…
-
Hello,
I am interested in using your DiffAttack on 1D sequences, with the aim of making them adversarial against a 1D neural-net classifier (for a specific type of sequence). I have a few questions …
-
Hi Chen, great work on adversarial attacks using diffusion models. I am trying to run your code but am getting the following errors:
python main.py --model_name "inception" --save_dir output --images_…
-
Thank you for your awesome work.
What should the `placeholder_token` be for the i2p experiment?
Currently, it's ```--placeholder_token="" --initializer_token="art"```, but I'm asking if this is c…
-
The Inception model I reproduced couldn't match your results. We usually use 299×299 as the input for that model, but here it is 224×224. Does this have any effect? Looking forward to your reply.
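For context, the spatial sizes inside Inception v3 depend directly on the input resolution. A minimal stdlib sketch of the standard conv/pool output-size formula, applied to the Inception v3 stem as described in the original paper (layer parameters assumed from that description), shows how 224×224 and 299×299 inputs diverge:

```python
def conv_out(n, k, s, p=0):
    """Standard conv/pool output-size formula: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def inception_v3_stem(n):
    # Assumed Inception v3 stem: three 3x3 convs then a 3x3 max-pool,
    # matching the layer parameters given in the Inception v3 paper.
    n = conv_out(n, 3, 2)     # 3x3 conv, stride 2
    n = conv_out(n, 3, 1)     # 3x3 conv, stride 1
    n = conv_out(n, 3, 1, 1)  # 3x3 conv, stride 1, padding 1
    n = conv_out(n, 3, 2)     # 3x3 max-pool, stride 2
    return n

print(inception_v3_stem(299))  # 73 -- the feature-map size the model was designed for
print(inception_v3_stem(224))  # 54 -- a different spatial size at every later stage
```

So a 224×224 input does change every downstream feature-map size, which can matter for attacks that rely on intermediate activations.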