9yte opened this issue 4 years ago
For the random seeds: they are not truly "random" seeds, but fixed, distinct ones. There are two groups of models (with dropout and without dropout), and models from different groups use different random seeds. In the first phase of the experiments, we did not include the seeds in the model names. At a later stage, we realized we needed more models, so we started adding them. Frustratingly, you should not expect to get the same model when training with exactly the same code and the same random seed on different machines, since setting the same random seed cannot guarantee reproducibility, as mentioned here.
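As a minimal illustration of the point above (seeding only pins down draws within one process; it does not guarantee bit-identical models across machines, where GPU nondeterminism and library versions also matter), here is a sketch using Python's standard-library generator:

```python
import random

def seeded_draws(seed, n=5):
    """Draw n pseudo-random numbers from a freshly seeded generator."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# Same seed, same process: the draws match exactly.
assert seeded_draws(1226) == seeded_draws(1226)
# Different seeds (as used for the two model groups) give different draws.
assert seeded_draws(1226) != seeded_draws(1227)
```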
As mentioned on Page 7 of the paper, the 18 models are for S1, where 6 different structures are used. They follow the same file-naming rules as the 12 models, so you only need to add GoogLeNet and MobileNetV2 to --substitute-nets in the attack-transfer.sh script. In other words, the following should work:
python craft_poisons_transfer.py --gpu 0 --subs-chk-name ckpt-%s-4800-dp0.200-droplayer0.000-seed1226.t7 ckpt-%s-4800-dp0.250-droplayer0.000-seed1226.t7 ckpt-%s-4800-dp0.300-droplayer0.000.t7 --subs-dp 0.2 0.25 0.3 --substitute-nets DPN92 SENet18 ResNet50 ResNeXt29_2x64d GoogLeNet MobileNetV2 --target-index 1 --target-label 6 --poison-label 8
Thanks for your helpful and thorough response.
On our side, we tried to reproduce your results (as a kind of baseline), but we got different results. I know, as you also mentioned, that even with the same random seeds you are not guaranteed to get the same results. Anyway, I just wanted to double-check with you that we are doing the comparison right.
We (1) execute the following command with TARGET_ID ranging from 1 to 50:
"python craft_poisons_transfer.py --gpu 0 --subs-chk-name ckpt-%s-4800-dp0.200-droplayer0.000-seed1226.t7 ckpt-%s-4800-dp0.250-droplayer0.000-seed1226.t7 ckpt-%s-4800-dp0.300-droplayer0.000.t7 --subs-dp 0.2 0.25 0.3 --substitute-nets DPN92 SENet18 ResNet50 ResNeXt29_2x64d GoogLeNet MobileNetV2 --target-index TARGET_ID --target-label 6 --poison-label 8"
(2) and average the attack success rate (the % of cases for which the target is classified as class 8) across these 50 different targets.
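The averaging in step (2) can be sketched as follows, assuming one final prediction per target (the function name and inputs here are hypothetical, not from the repo):

```python
def attack_success_rate(predictions, poison_label=8):
    """Percentage of targets whose final prediction is the poison label."""
    hits = sum(1 for p in predictions if p == poison_label)
    return 100.0 * hits / len(predictions)

# e.g. 27 of 50 targets pushed into class 8 -> 54.0
assert attack_success_rate([8] * 27 + [6] * 23) == 54.0
```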
I'm not sure whether we chose the same target-net models as you did. In particular, we (effectively) ran the above command with the parameter --target-net set to one of the following: "DPN92 SENet18 ResNet50 ResNeXt29_2x64d GoogLeNet MobileNetV2 ResNet18 DenseNet121". Is this right?
Anyway, we got the following attack success rates, which are a bit different from Figure 5: DPN92: 54 SENet18: 36 ResNet50: 72 ResNeXt29_2x64d: 42 GoogLeNet: 54 MobileNetV2: 30 ResNet18: 34 DenseNet121: 30
We have two explanations for this discrepancy: (1) we evaluated on different target-nets than you did; (2) sometimes the confidence of the attack success is low, i.e., the confidence score of the target being classified as class 8 is only slightly higher than that of class 6. In fact, this is not a rare occurrence. So perhaps we just had bad luck? I'm not convinced this is what happened, though. It would be great if you could resolve this uncertainty for us. We really appreciate it.
Thanks for your help.
Thanks for your interest in our work. For convenience of comparison, here I list the success rates from Figure 5:
DPN92: 54 SENet18: 34 ResNet50: 72 ResNeXt29_2x64d: 54 GoogLeNet: 46 MobileNetV2: 50 ResNet18: 52 DenseNet121: 42
In fact, you achieved better or almost the same success rates for half of the targets (DPN92, SENet18, ResNet50, GoogLeNet). The results in the black-box setting (ResNet18, DenseNet121) seem much worse, but the results for the gray-box ones (the other 6 networks) seem reasonable to me.
The target networks used in the paper should have been named in the format "ckpt-%s-4800.t7". You can try these networks to eliminate the difference in the target nets. I hope I am pointing you at the correct target models....
It is a pity that we did not report the mean and variance of the success rates in the paper. Since both the poisons and the target network have uncertainty, and we evaluated only 50 target images, it is possible to see relatively large differences. To verify whether we were extremely lucky, you can try even more sets of different target networks. You should be more familiar with the experiments than I am now, so I would appreciate it if you could try a set of different target nets and check the results.
Thanks for your quick response. We really appreciate that.
I did use target networks named in the format "ckpt-%s-4800.t7". I suspect these are not the models you used in the paper, as I just found another clue. I know this project is from more than a year ago, and it is not easy to remember these details, so sorry in advance for the inconvenience.
Let me explain what I found. For some reason, I tried to perform the attack with dropout disabled (dp=0.0), and I got a much higher attack success rate on the network "ckpt-SENet18-4800.t7" than in the case where dropout is enabled. This is the exact opposite of the trend I observed for other target nets (and of what is discussed in your paper). So I suspected that, in this specific case, we have a white-box setting instead of a gray-box one. In fact, the sha1sums of "ckpt-SENet18-4800.t7" and "ckpt-SENet18-4800-dp0.200-droplayer0.000-seed1226.t7" are identical ("88e0a636521dc7ee83f2d69df8465370c5d584e1"), i.e., we used this target network as a substitute network during the attack. I checked all other substitute networks against the target networks; this pair is the only duplicate. For now, I'm going to replace "ckpt-SENet18-4800.t7" with a newly trained model (with a seed different from 1226).
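For anyone wanting to repeat this duplicate check over a set of checkpoints, here is a small sketch of the hashing (the helper names are mine, not from the repo; it does the same job as running sha1sum on each file):

```python
import hashlib

def sha1_of(path, chunk=1 << 20):
    """SHA-1 of a file, streamed so large checkpoint files fit in memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def find_duplicates(paths):
    """Group checkpoint files that share the same content hash."""
    by_hash = {}
    for p in paths:
        by_hash.setdefault(sha1_of(p), []).append(p)
    return {h: ps for h, ps in by_hash.items() if len(ps) > 1}
```

Running `find_duplicates` over the substitute and target checkpoints should flag exactly one pair if the SENet mix-up described above is present.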
Again, I know it's not a very recent project and it is indeed hard to remember these things, but if you could figure out which target nets you used, that would be amazing. And of course, it would help us perform a much fairer comparison.
Thanks for your help.
Hey Hojjat,
I must apologize for all the confusion! We did make the mistake you mentioned with the SENet checkpoint. In the beginning, I forgot to include the seed in the file name, but I did have a dp=0.2 checkpoint, so I wanted to make a symbolic link to it. However, I linked to the SENet without dropout by mistake. As a result, all my experiments in the paper were run with this mistake! However, this should not make the results on attacking other networks better, i.e., the results reported in the paper are probably worse than the actual success rates we could have achieved. For your convenience, I have included that SENet checkpoint with dp=0.2 here.
About the discrepancy between the results: I have double-checked the logs, and I am now almost sure that we used "ckpt-%s-4800.t7" in the paper. However, the poisons we used may have been trained for a different number of iterations, because some runs were interrupted halfway due to limits on the time and computing resources we could get. My script just takes the last checkpoint of the poisons for each target, and I think they were trained for up to 5000 iterations instead of the 4000 mentioned in the paper. I have included the poisons we used for Figure 5 here. Note that the poisons for target 2 are missing, but we counted that target directly as a failure. I have also included the logs for attacking ResNet18 in Figure 5 here.
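The "take the last checkpoint for each target" step can be sketched as below. The `ite<N>` filename pattern here is hypothetical (the repo's actual poison-checkpoint naming may differ); the point is simply selecting the checkpoint with the highest iteration count:

```python
import re

def last_poison_checkpoint(filenames):
    """Pick the poison checkpoint trained for the most iterations.

    Assumes (hypothetically) names embedding 'ite<N>'; files without a
    match sort first so they are never picked over a real checkpoint.
    """
    def ite(name):
        m = re.search(r"ite(\d+)", name)
        return int(m.group(1)) if m else -1
    return max(filenames, key=ite)

assert last_poison_checkpoint(
    ["poison-ite2000.pth", "poison-ite5000.pth", "poison-ite4000.pth"]
) == "poison-ite5000.pth"
```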
Could you try to get the results with more iterations, and also with the correct SENet? Sorry again for the inconvenience. We will try to update the results in the paper.
Hi, sorry for "hijacking" this thread but I have some questions that are related to model checkpoints.
Will the checkpoints in https://www.dropbox.com/s/7dorf2grr3vdgqt/model-chks.tgz?dl=0 be all the checkpoints needed to conduct the experiments in the paper (results in figures 5, 6, 7)?
I think so... Is there anything missing that you are aware of?
Thank you Zhuchen. I have managed to run the experiments.
Can I check whether the poisoning was successful given the following output? It seems the target image label (6) is still predicted correctly (as 6).
Advice/guidance would be appreciated :)
$ bash launch/attack-end2end.sh
Namespace(chk_path='chk-black', chk_subdir='poisons', dset_path='datasets', end2end=True, eval_poison_path='', gpu='1', lr_decay_epoch=[30, 45], model_resume_path='model-chks', nearest=False, num_per_class=50, original_grad=True, poison_decay_ites=[], poison_decay_ratio=0.1, poison_epsilon=0.1, poison_ites=4000, poison_label=8, poison_lr=0.04, poison_momentum=0.9, poison_num=5, poison_opt='adam', resume_poison_ite=0, retrain_bsize=64, retrain_epochs=60, retrain_lr=0.0001, retrain_momentum=0.9, retrain_opt='adam', retrain_wd=0.0005, subs_chk_name=['ckpt-%s-4800-dp0.200-droplayer0.000-seed1226.t7', 'ckpt-%s-4800-dp0.250-droplayer0.000-seed1226.t7', 'ckpt-%s-4800-dp0.300-droplayer0.000.t7'], subs_dp=[0.2, 0.25, 0.3], subset_group=0, substitute_nets=['DPN92', 'SENet18', 'ResNet50'], target_index=1, target_label=6, target_net='ResNet18', test_chk_name='ckpt-%s-4800.t7', tol=1e-06, train_data_path='datasets/CIFAR10_TRAIN_Split.pth')
... ... ...
Target Label: 6, Poison label: 8, Prediction:6, Target's Score:[-1.3546052, -0.97019506, -2.211053, -0.58111334, -1.7288998, -1.4619219, 7.3897023, -1.5813365, 3.330722, -0.73097295], Poisons' Predictions:[8, 8, 8, 8, 8]
2020-02-01 02:33:54, Epoch 59, Iteration 0, loss 0.000 (0.000), acc 100.000 (100.000)
2020-02-01 02:33:54, Epoch 59, Iteration 7, loss 0.000 (0.000), acc 100.000 (100.000)
Target Label: 6, Poison label: 8, Prediction:6, Target's Score:[-1.3544102, -0.969369, -2.2114637, -0.58122176, -1.7291251, -1.4623605, 7.388263, -1.5813239, 3.3316255, -0.7300077], Poisons' Predictions:[8, 8, 8, 8, 8]
2020-02-01 02:33:55 Epoch 59, Val iteration 0, acc 92.000 (92.000)
2020-02-01 02:33:57 Epoch 59, Val iteration 19, acc 93.400 (92.330)
* Prec: 92.33000144958496
end of output.
Hi Zhuchen, if you would be so kind as to help with a 2nd question about defending against poisons.
Agreed, a poisoned dataset looks similar to a normal dataset to a human.
However, if the test dataset is sampled from the same distribution as the training dataset, won't the test performance have a dip in the poisoned class?
E.g., for a poisoned model on the test dataset: class 1: good performance; class 2: good performance; class 3: good performance; ... class 6 (poisoned): poor performance; class 7: good performance; ... every other class: good performance.
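The dip described above could be checked with a per-class accuracy breakdown on the test set; a minimal sketch (helper name is mine, inputs are plain label/prediction lists):

```python
from collections import defaultdict

def per_class_accuracy(labels, preds):
    """Accuracy broken down by true class, to spot a dip in one class."""
    total = defaultdict(int)
    correct = defaultdict(int)
    for y, p in zip(labels, preds):
        total[y] += 1
        correct[y] += int(y == p)
    return {c: correct[c] / total[c] for c in total}

# Class 6 dips while class 1 stays clean.
acc = per_class_accuracy([6, 6, 6, 6, 1, 1], [8, 8, 6, 6, 1, 1])
assert acc[6] == 0.5 and acc[1] == 1.0
```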
Sorry for the late reply. For your first question, no, the attack failed. :(
For your second question: in this paper we attack only one target image at a time, with 5 poison images. If you do not count the poisons as part of the data distribution, this should not affect test-set performance significantly.
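To make the success criterion used in this thread concrete (the attack succeeds only when the target's top score belongs to the poison label), here is a small sketch applied to the scores pasted in the log earlier; the function name is mine, not from the repo:

```python
def attack_succeeded(scores, poison_label=8):
    """True only if the argmax of the target's scores is the poison label."""
    prediction = max(range(len(scores)), key=lambda i: scores[i])
    return prediction == poison_label

# Scores from the pasted log: class 6 still wins (7.39 > 3.33), so the attack failed.
scores = [-1.3546052, -0.97019506, -2.211053, -0.58111334, -1.7288998,
          -1.4619219, 7.3897023, -1.5813365, 3.330722, -0.73097295]
assert attack_succeeded(scores) is False
```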
Hi,
First of all, I want to thank you a lot for open sourcing your project. That means a lot!
I have a question. I'm a bit confused by the model names. As far as I understood from your paper, in your experiments the victim models are trained with a different seed than the substitute models. Now, looking at the "launch/attack-transfer.sh" script, for dropouts 0.2 and 0.25 it seems you're using seed1226, but for dropout 0.3 the seed is not specified in the name. For the victim, the seed is also not included in the model name. Would you please clarify this? In general, when I select model x as a substitute net and model y as the victim, how can I be sure that these models were trained with different seeds?
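One way to make this check mechanical, given the naming scheme discussed in this thread ("-seedNNNN" embedded in some checkpoint names and absent from early-phase ones), is to parse the seed out of the filename and treat a missing seed as "unknown" rather than guessing. The helper names here are hypothetical:

```python
import re

def seed_from_name(filename, default=None):
    """Extract the training seed from a checkpoint name, if present.

    Early-phase checkpoints have no '-seedNNNN' suffix; those return the
    default so a missing seed stays visible instead of being guessed.
    """
    m = re.search(r"seed(\d+)", filename)
    return int(m.group(1)) if m else default

def seeds_differ(subs_name, victim_name):
    """Conservatively require both seeds to be known and distinct."""
    s, v = seed_from_name(subs_name), seed_from_name(victim_name)
    return s is not None and v is not None and s != v

assert seed_from_name("ckpt-SENet18-4800-dp0.200-droplayer0.000-seed1226.t7") == 1226
assert seed_from_name("ckpt-SENet18-4800.t7") is None
```

Under this rule, the unnamed-seed checkpoints in the question above would come back as "unknown", which is exactly the ambiguity being asked about.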
Thanks for your time.