reds-lab/Narcissus
The official implementation of the CCS'23 paper on Narcissus, a clean-label backdoor attack that needs only THREE images to poison a face recognition dataset and achieves a 99.89% attack success rate.
https://arxiv.org/pdf/2204.05255.pdf
MIT License
105 stars · 12 forks
Issues
#10 · Request for ViT poisons · opened by opooladz 4 months ago · 0 comments
#9 · Segmentation fault (core dumped) · opened by 1Opera 8 months ago · 0 comments
#8 · Is the labeling correct? · opened by hideyuki-oiso 11 months ago · 1 comment
#7 · How to implement attack in single-channel dataset · opened by outouser 11 months ago · 1 comment
#6 · Questions about the experimental setup · opened by vivien319 1 year ago · 2 comments
#5 · Encountering an issue similar to "Problem with Attack Success Rate #2" · closed by LandAndLand 7 months ago · 0 comments
#4 · Query Regarding a Potential Typo in the Narcissus.ipynb File · closed by LandAndLand 1 year ago · 2 comments
#3 · Every time I run the code, the ASR values are different and vary widely · closed by vivien319 1 year ago · 4 comments
#2 · Problem with Attack Success Rate · closed by nguyenhongson1902 1 year ago · 0 comments
#1 · Little Understand · opened by RorschachChen 2 years ago · 12 comments