bolunwang / backdoor
Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and Privacy 2019.
https://sandlab.cs.uchicago.edu/
MIT License
267 stars · 63 forks
Issues
#18 · URL does not exist · chengyiqiu1121 · opened 8 months ago · 0 comments
#17 · the code · KDluo · opened 1 year ago · 1 comment
#16 · reg_best does not converge · snutesh · opened 1 year ago · 0 comments
#15 · The download link of the data is invalid · Kolt1911 · opened 1 year ago · 0 comments
#14 · Is this detection method a white-box setting? · tonggege001 · closed 3 years ago · 2 comments
#13 · Where is the implementation of the partial backdoor attack? · Kataang · closed 2 years ago · 1 comment
#12 · The reversed mask of the targeted label can't converge on the MNIST dataset · Crane-Mocker · closed 3 years ago · 1 comment
#11 · y_true and y_pred position · shihongf · closed 3 years ago · 2 comments
#10 · Implementation on other datasets? · ZJZAC · closed 3 years ago · 1 comment
#9 · Adaptation for partial backdoor attack · Xavierxhq · closed 4 years ago · 2 comments
#8 · Watermark pattern might be incorrect · mvillarreal14 · closed 4 years ago · 1 comment
#7 · Reverse-engineered triggers: can you share them? · mvillarreal14 · closed 4 years ago · 1 comment
#6 · Information about VGGFace models is missing: can you add it? · mvillarreal14 · closed 4 years ago · 4 comments
#5 · About data poisoning · tiroshenao · closed 4 years ago · 1 comment
#4 · Detailed model construction file · Yi2Zhao · closed 5 years ago · 4 comments
#3 · Trigger pattern mentioned in the paper · jiashenC · closed 5 years ago · 2 comments
#2 · The reverse-engineering output is not correct · kaidi-jin · closed 5 years ago · 4 comments
#1 · Detection ineffective on MNIST model · ChengFu0118 · closed 5 years ago · 3 comments