The official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack, which needs only THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
Your work is amazing, but I have some questions regarding the limited descriptions in the paper about the experiments on PubFig & CelebA and Tiny-ImageNet & Caltech-256.
How many classes of the CelebA dataset are used for training? And are the experimental settings for PubFig & CelebA and Tiny-ImageNet & Caltech-256 the same as in the CIFAR-10/Tiny-ImageNet experiment, i.e., the data augmentation of the POOD data for the surrogate model training stage, the number of training epochs during the attack phase, the optimizer parameter settings, the random seeds, and so on?
I would appreciate it if you could provide the related code for these datasets or a more specific description of the experimental setup.
I use 200 classes from CelebA to train the model, but if you want to include more classes, I think it will be fine. For the second question, yes, the pipeline is the same across all the experiments.
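In case it helps anyone reproducing the CelebA setup, below is a minimal sketch of one way to restrict CelebA to its 200 most frequent identities with torchvision. The class count (200) follows the reply above, but the image resolution, transforms, batch size, and label remapping are illustrative assumptions, not the exact configuration released with the paper.

```python
# Hypothetical sketch: keep only the 200 most frequent CelebA identities
# for training. Resolution, transforms, and loader settings are assumptions.
from collections import Counter

import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),  # assumed input resolution
    transforms.ToTensor(),
])

celeba = datasets.CelebA(
    root="./data", split="train", target_type="identity",
    transform=transform, download=True,
)

# Count images per identity and keep the 200 identities with the most images.
identities = celeba.identity[:, 0].tolist()
top_ids = [ident for ident, _ in Counter(identities).most_common(200)]
id_to_label = {orig: new for new, orig in enumerate(sorted(top_ids))}

# Build a subset containing only images of the selected identities.
keep = [idx for idx, ident in enumerate(identities) if ident in id_to_label]
train_set = Subset(celeba, keep)

# Remap the original identity numbers to contiguous labels 0..199.
def collate(batch):
    imgs, idents = zip(*batch)
    labels = torch.tensor([id_to_label[int(i)] for i in idents])
    return torch.stack(imgs), labels

loader = DataLoader(train_set, batch_size=128, shuffle=True, collate_fn=collate)
```

This only prepares the 200-class training set; the rest of the pipeline (POOD surrogate training, trigger optimization, poisoning) would follow the same steps as the released CIFAR-10 code, per the reply above.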