reds-lab / Narcissus

The official implementation of the CCS'23 paper Narcissus, a clean-label backdoor attack that needs only THREE images to poison a face recognition dataset and achieves a 99.89% attack success rate.
https://arxiv.org/pdf/2204.05255.pdf
MIT License

Problem with Attack Success Rate #2

Closed. nguyenhongson1902 closed this issue 1 year ago.

nguyenhongson1902 commented 1 year ago

Hello, thank you for your work. I've tried running your code to train a trigger and then displaying my trigger and your trigger (resnet18_trigger.npy). I found that there was something different between mine and yours. You can see the images below: my_trigger your_trigger (The above image is my trigger, and the below is yours) After that, I tried running experiments with my trigger and your trigger to see the performance (Training ACC, Clean test ACC, Attack Success Rate, Target class clean test ACC). In terms of Training ACC, Clean test ACC, Target class clean test ACC, they are kind of the same as when using your trigger. But the difference here lies in the Attack Success Rate, when I run with my trigger, the Attack Success Rate sort of levels off and remains no more than 0.1. I'll put the 2 images down below for you to see the effect: result_resnet18_trigger_20230310 result_resnet18_trigger (The above is when using my trigger, and the below is when using your trigger (resnet18_trigger.npy)) Note that I only change the code to load the trigger, and everywhere else remains intact.