reds-lab / Narcissus

The official implementation of the CCS '23 paper Narcissus: a clean-label backdoor attack that needs only THREE images to poison a face recognition dataset and achieves a 99.89% attack success rate.
https://arxiv.org/pdf/2204.05255.pdf
MIT License

Query Regarding a Potential Typo in the Narcissus.ipynb File #4

Closed LandAndLand closed 1 year ago

LandAndLand commented 1 year ago

I've been using your ipynb file recently and ran into a minor point of confusion that I haven't seen discussed anywhere else, so I thought I'd ask for clarification here in case I've misunderstood something. I hope you won't mind my question! [screenshot of the notebook cell]

pmzzs commented 1 year ago

You can remove this step, since the next line already clears the gradients via the optimizer; sorry for the duplicated code.
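For readers who hit the same question: the pattern being discussed is a manual gradient-zeroing loop immediately followed by `optimizer.zero_grad()`, which clears the same gradients again. A minimal sketch of why the first step is redundant (plain Python standing in for the PyTorch objects; the `Param`/`Optimizer` classes here are illustrative stand-ins, not the actual notebook code):

```python
# Illustrative stand-ins for a PyTorch parameter and optimizer.
class Param:
    def __init__(self):
        self.grad = 1.0  # pretend a backward pass left a gradient here

class Optimizer:
    def __init__(self, params):
        self.params = params

    def zero_grad(self):
        # Clears every registered parameter's gradient, analogous to
        # torch.optim.Optimizer.zero_grad().
        for p in self.params:
            p.grad = 0.0

params = [Param(), Param()]
opt = Optimizer(params)

# The redundant manual step the issue asks about:
for p in params:
    p.grad = 0.0

# The very next line does the same work, so the loop above can be removed:
opt.zero_grad()

print([p.grad for p in params])  # all gradients are zero either way
```

Removing the manual loop does not change behavior, since `zero_grad()` resets every parameter's gradient before the next backward pass regardless.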

LandAndLand commented 1 year ago

Thank you for your response!