-
Hello, I am very interested in the experiments on adversarial attacks against pixel diffusion models in the paper. Will the code be released?
-
Hi Indu,
Thank you for your wonderful work! This work is quite interesting to me and I think the results are amazing. However, I was confused when I tried applying this method to my own dataset. I …
-
Hello, I am currently reproducing your paper. Regarding Figure 4, I have some questions that I would like to ask you:
1. Is the dataset ImageNet-compatible?
2. Besides DiffAttack, which specific surr…
-
Hello, I'm really interested in your work! However, I have some questions about the adversarial attack with text perturbation. In Table 5, the adversarial attack with only perturbation on the text cou…
-
Thank you for your awesome work.
What should the `placeholder_token` be for the i2p experiment?
Currently, it's `--placeholder_token="" --initializer_token="art"`, but I'm asking if this is c…
-
Post your questions here about: “Image Learning” & “Audio and Video Learning”, Thinking with Deep Learning, Chapters 13 & 14.
-
Hello,
I am interested in using your DiffAttack on 1D sequences, with the aim of making them adversarial against a 1D neural-net classifier (for a specific type of sequences). I have a few questions …
-
The Inception model I reproduced couldn't match your results. The usual input size for that model is 299×299, but here it is 224×224. Does this have any effect? Looking forward to your reply.
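One way to see why the input size can matter: trace the spatial dimensions through the network's first few layers. A minimal sketch (assuming the standard Inception-v3 stem layout of a 3×3 stride-2 conv, a 3×3 conv, a padded 3×3 conv, and a 3×3 stride-2 max-pool; layer names below are only illustrative):

```python
def conv_out(n, k, s=1, p=0):
    """Output spatial size of a conv/pool layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

def stem_size(n):
    """Trace the spatial size through an Inception-v3-style stem (simplified sketch)."""
    n = conv_out(n, k=3, s=2)   # 3x3 conv, stride 2
    n = conv_out(n, k=3)        # 3x3 conv
    n = conv_out(n, k=3, p=1)   # 3x3 conv, padding 1
    n = conv_out(n, k=3, s=2)   # 3x3 max-pool, stride 2
    return n

print(stem_size(299))  # 73 at the canonical 299x299 input
print(stem_size(224))  # 54 at a 224x224 input
```

Since every downstream feature map inherits these sizes, a 224×224 input yields different activations than the 299×299 resolution the model was trained at, which can plausibly shift both clean accuracy and attack success rates.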
-
Hi Chen, great work on adversarial attacks using diffusion models. I am trying to run your code but am getting the following errors:
python main.py --model_name "inception" --save_dir output --images_…
-
Thanks for your excellent work.
I found that it took six hours to train on just 1,000 images, which is certainly cost-intensive. I would like to ask whether this is specific to my setup or inherent to the model…