hkunzhe / label_consistent_attacks_pytorch

A minimal PyTorch implementation of Label-Consistent Backdoor Attacks
MIT License

About the attack success rate #3

Closed by clee-jaist 1 year ago

clee-jaist commented 1 year ago

Hi, I ran a small test today with the same parameter settings as in your code, and found that the poison test acc is consistently above 90%. Does this refer to the attack success rate? I checked the evaluation code and found that the poison test and clean test code paths are exactly the same, which confuses me a bit. The last 5 epochs of poison_train are shown below.

Thank you so much for your time

===Epoch: 196/200===
Poison training...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/391      0.00037    1.00000
130/391    0.00073    1.00000
260/391    0.00076    1.00000
390/391    0.00817    1.00000
Training summary:
---------  ---------  ----------
loss       acc        time
---------  ---------  ----------
0.00120    0.99996    11.87119
Test model on clean data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/79       0.24480    0.96094
26/79      0.37069    0.91406
52/79      0.25858    0.91406
78/79      0.53093    0.93750
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.23299    0.94581    0.86842
Test model on poison data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/71       0.08573    0.97656
23/71      0.03288    1.00000
46/71      0.03576    0.99219
69/71      0.02621    1.00000
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.04901    0.98834    0.78306
Adjust learning rate to 0.0010000000000000002
===Epoch: 197/200===
Poison training...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/391      0.00092    1.00000
130/391    0.00142    1.00000
260/391    0.00096    1.00000
390/391    0.00048    1.00000
Training summary:
---------  ---------  ----------
loss       acc        time
---------  ---------  ----------
0.00112    0.99994    12.01725
Test model on clean data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/79       0.23334    0.96094
26/79      0.34683    0.91406
52/79      0.24993    0.91406
78/79      0.53390    0.93750
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.22779    0.94689    0.90150
Test model on poison data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/71       0.12132    0.97656
23/71      0.06421    0.98438
46/71      0.06803    0.98438
69/71      0.05093    0.99219
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.07787    0.97810    0.81564
Adjust learning rate to 0.0010000000000000002
===Epoch: 198/200===
Poison training...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/391      0.00123    1.00000
130/391    0.00086    1.00000
260/391    0.00099    1.00000
390/391    0.00146    1.00000
Training summary:
---------  ---------  ----------
loss       acc        time
---------  ---------  ----------
0.00122    0.99994    11.99979
Test model on clean data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/79       0.23626    0.96094
26/79      0.35506    0.92188
52/79      0.25306    0.91406
78/79      0.53051    0.93750
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.23139    0.94591    0.85511
Test model on poison data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/71       0.08064    0.97656
23/71      0.03207    1.00000
46/71      0.03732    0.99219
69/71      0.02520    1.00000
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.04738    0.98900    0.79475
Adjust learning rate to 0.0010000000000000002
===Epoch: 199/200===
Poison training...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/391      0.00082    1.00000
130/391    0.00040    1.00000
260/391    0.00080    1.00000
390/391    0.00155    1.00000
Training summary:
---------  ---------  ----------
loss       acc        time
---------  ---------  ----------
0.00119    0.99998    12.02322
Test model on clean data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/79       0.24507    0.96094
26/79      0.36788    0.91406
52/79      0.25810    0.90625
78/79      0.52386    0.93750
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.23248    0.94630    0.86541
Test model on poison data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/71       0.11731    0.96875
23/71      0.06122    0.99219
46/71      0.07014    0.97656
69/71      0.04956    0.99219
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.07854    0.97777    0.80501
Adjust learning rate to 0.0010000000000000002
===Epoch: 200/200===
Poison training...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/391      0.00054    1.00000
130/391    0.00046    1.00000
260/391    0.00050    1.00000
390/391    0.00155    1.00000
Training summary:
---------  ---------  ----------
loss       acc        time
---------  ---------  ----------
0.00115    0.99992    12.06870
Test model on clean data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/79       0.24703    0.96094
26/79      0.36921    0.91406
52/79      0.25322    0.90625
78/79      0.54743    0.93750
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.23348    0.94600    0.84732
Test model on poison data...
---------  ---------  ---------
Iteration  loss       acc
---------  ---------  ---------
0/71       0.12039    0.97656
23/71      0.06394    0.98438
46/71      0.07073    0.98438
69/71      0.05400    0.99219
---------  ---------  ---------
loss       acc        time
---------  ---------  ---------
0.07907    0.97755    0.81543
Adjust learning rate to 0.0010000000000000002

Process finished with exit code 0
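Note that identical clean-test and poison-test code paths are not necessarily a bug: a generic accuracy loop is dataset-agnostic, so the same code can evaluate both sets with only the DataLoader differing. A minimal sketch of such a shared loop (the function and loader names here are illustrative, not the repo's actual API):

```python
import torch

def evaluate(model, loader):
    """Generic accuracy loop; works unchanged for clean or triggered data,
    because only the dataset behind the loader differs."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total
```

Calling this once with a clean test loader and once with a poisoned test loader would produce the two accuracy columns seen in the log above.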

hkunzhe commented 1 year ago
  1. The poison test acc is just the attack success rate.
  2. The poison test data is wrapped with CleanLabelDataset. That is to say, triggers are added (no adversarial noise), and the labels remain consistent. Please refer to the code for details.
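To make the terminology concrete, here is a hedged sketch of what "poison test acc = attack success rate" means in general for a backdoor attack. This is not the repo's actual code: `add_trigger`, the patch size/position, and `target_label` are illustrative assumptions.

```python
import torch

def add_trigger(images: torch.Tensor, patch_size: int = 3) -> torch.Tensor:
    """Stamp a solid patch trigger in the bottom-right corner
    (no adversarial noise is added at test time)."""
    images = images.clone()
    images[:, :, -patch_size:, -patch_size:] = 1.0
    return images

def attack_success_rate(model, images, labels, target_label: int) -> float:
    """Fraction of triggered non-target-class images classified as the target."""
    keep = labels != target_label  # target-class images cannot demonstrate the attack
    preds = model(add_trigger(images[keep])).argmax(dim=1)
    return (preds == target_label).float().mean().item()
```

With a backdoored model, this quantity is what a "poison test acc" column reports, while the clean test acc is the same model evaluated on untriggered images against their true labels.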
clee-jaist commented 1 year ago
> 1. The poison test acc is just the attack success rate.
> 2. The poison test data is wrapped with CleanLabelDataset. That is to say, triggers are added (no adversarial noise), and the labels remain consistent. Please refer to the code for details.

Thank you for your reply. I checked the details of the evaluation code and found that you are right.