reds-lab / Narcissus

The official implementation of the CCS'23 paper, Narcissus clean-label backdoor attack -- only takes THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
https://arxiv.org/pdf/2204.05255.pdf
MIT License

Questions about the experimental setup #6

Open vivien319 opened 1 year ago

vivien319 commented 1 year ago

Your work is amazing, but I have some questions about the experiments on PubFig & CelebA and Tiny-ImageNet & Caltech-256, which are only briefly described in the paper:

1. How many classes of CelebA are used for training?
2. Are the experimental settings for PubFig & CelebA and Tiny-ImageNet & Caltech-256 the same as for the CIFAR-10/Tiny-ImageNet experiment, e.g., the POOD augmentation in the surrogate-model training stage, the number of training epochs in the attack phase, the optimizer settings, the random seeds, and so on?

I would appreciate it if you could provide the related code for these datasets or a more detailed description of the experimental setup.

pmzzs commented 10 months ago

I use 200 classes from CelebA to train the model; if you want to include more classes, that should work fine as well. For the second question, yes, the pipeline is the same across all the experiments.
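(Not from the authors' code.) A minimal sketch of how one could build such a 200-identity CelebA training subset with torchvision; the selection rule (keep the 200 identities with the most images), the input size, and the batch size are assumptions, not details confirmed in the paper or this thread:

```python
import torch
from collections import Counter
from torch.utils.data import Subset, DataLoader
from torchvision import datasets, transforms

# Assumed preprocessing; the paper's exact input resolution is not stated here.
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# CelebA with identity labels (1..10177) as targets.
celeba = datasets.CelebA(root="./data", split="train",
                         target_type="identity",
                         transform=transform, download=True)

# Keep the 200 identities with the most training images (an assumption).
ids = celeba.identity[:, 0].tolist()
top200 = [i for i, _ in Counter(ids).most_common(200)]
id_to_label = {orig: new for new, orig in enumerate(top200)}

keep = [idx for idx, i in enumerate(ids) if i in top200]
train_subset = Subset(celeba, keep)

def collate(batch):
    # Remap original CelebA identity ids to class indices 0..199.
    imgs, targets = zip(*batch)
    labels = torch.tensor([id_to_label[int(t)] for t in targets])
    return torch.stack(imgs), labels

loader = DataLoader(train_subset, batch_size=64, shuffle=True,
                    collate_fn=collate)
```

With a subset like this, the rest of the Narcissus pipeline (surrogate training on POOD data, trigger synthesis, poisoning) would presumably be applied unchanged, as noted above.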

PZMDSB commented 9 hours ago

Could you please provide the related code for these datasets?