The official implementation of the CCS'23 paper on the Narcissus clean-label backdoor attack -- it takes only THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack success rate.
I've been working with your ipynb file recently and ran into a point of confusion that I haven't seen discussed anywhere else. I thought I'd raise it here and ask for clarification, in case I've misunderstood something. I hope you don't mind the question!