reds-lab / Narcissus

The official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack, which needs only THREE images to poison a face recognition dataset and achieves a 99.89% attack success rate.
https://arxiv.org/pdf/2204.05255.pdf
MIT License

Encountering an issue similar to "Problem with Attack Success Rate #2" #5

Closed LandAndLand closed 7 months ago

LandAndLand commented 1 year ago

Problem Description: I hope this message finds you well. I have been working with your project and have encountered an issue similar to "Problem with Attack Success Rate #2" mentioned in the repository. I'm attempting to adapt the provided "best noise" to a different dataset, specifically the GTSRB dataset for German Traffic Sign Recognition. Request for Guidance: I am reaching out to kindly request your guidance on how to train a noise pattern similar to the "best noise" you provided in order to effectively trigger the model on the GTSRB dataset. Since your expertise has been demonstrated in your work, I believe your insights would be invaluable in helping me achieve consistent results. Thank you for your time and consideration.