Closed: mo666666 closed this issue 5 months ago.
Hi, thanks for your submission. I will add the model as soon as possible!
Hi @mo666666, have you closed the issue because you don't want your model to be added anymore or for some other reason?
Hello, dedeswim!
Very sorry for the late reply. We closed the issue because, at submission time, we ran our experiments on a private cluster that did not support downloading checkpoints to a local machine. Recently, however, we finally obtained enough computational resources (8*A100) to rerun the experiments on the ImageNet dataset. We found that our reported results are reproducible, with differences below 0.10%. Would you mind adding our checkpoint and reported results to the RobustBench benchmark? We believe it is the most important benchmark in the adversarial-robustness community, and our checkpoint might help researchers understand the robustness of ViTs.
Apologies again for the delayed response and for any inconvenience.
Best regards,
Yichuan
Add here the claim for your model(s). Copy and paste the following subsection for the number of models you want to add.
Model 1
Architecture: swin_base_patch4_window7_224
Threat Model: Linf
eps: 4/255
Clean accuracy: 74.47%
Robust accuracy: 38.61%
Additional data: true
Evaluation method: AutoAttack
Checkpoint and code: checkpoint, code
Link for our evaluation log: log
Model 2
Architecture: vit_base_patch16_224
Threat Model: Linf
eps: 4/255
Clean accuracy: 69.07%
Robust accuracy: 34.69%
Additional data: true
Evaluation method: AutoAttack
Link for our evaluation log: log
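For reference, the clean and robust accuracy fields above follow the usual definitions: clean accuracy is the fraction of unperturbed images classified correctly, and robust accuracy is the fraction still classified correctly after the AutoAttack perturbation. A minimal sketch in plain Python (the labels and predictions below are made-up illustration data, not the actual evaluation outputs):

```python
# Sketch of how clean vs. robust accuracy are computed from predictions.
# The labels and predictions are made-up illustration data.

def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

labels      = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
clean_preds = [0, 1, 2, 3, 4, 5, 6, 7, 8, 0]  # predictions on clean inputs
adv_preds   = [0, 1, 2, 3, 9, 9, 9, 7, 8, 0]  # predictions on adversarial inputs

clean_acc  = accuracy(clean_preds, labels)   # 9/10 correct
robust_acc = accuracy(adv_preds, labels)     # 6/10 still correct under attack
print(f"clean: {clean_acc:.2%}, robust: {robust_acc:.2%}")
```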
The architecture is implemented in timm. If not, I added the link to the architecture implementation so that it can be added.

Hi, thanks for the submission, I'll add the models in the next few days.
Glad to hear that! Thank you for your kind assistance!
It seems that the checkpoints and logs are not accessible, probably need to be made public.
Sorry for the wrong setup. Are they accessible now?
Yeah, it seems to work, thanks.
You are welcome! If any other bugs remain, feel free to contact me!
I've started adding the models here. I've tried to follow your implementation, but I get a parameter size mismatch when loading the Swin-B (it works fine for the ViT-B). Also, can you confirm that the preprocessing is the correct one?
Thank you for your effort. Could you share the mismatch you get when you load our checkpoint? Although we trained the model with the swin_base_patch4_window7_224_in22k
API, the parameters can also be loaded with the swin_base_patch4_window7_224
API. I think both of the following implementations will work fine:
('Mo2022When_Swin-B', {
'model': lambda: normalize_model(timm.create_model(
'swin_base_patch4_window7_224_in22k', pretrained=False, num_classes=1000), mu, sigma),
'gdrive_id': '1-SXi4Z2X6Zo_j8EO4slJcBMXNej8fKUd',
'preprocessing': 'Res224',
}),
('Mo2022When_Swin-B', {
'model': lambda: normalize_model(timm.create_model(
'swin_base_patch4_window7_224', pretrained=False), mu, sigma),
'gdrive_id': '1-SXi4Z2X6Zo_j8EO4slJcBMXNej8fKUd',
'preprocessing': 'Res224',
}),
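As a side note, a parameter size mismatch like the one reported above is usually easiest to pin down by comparing the tensor shapes in the checkpoint's state_dict against those of the freshly created model. A minimal, framework-free sketch of that comparison (with PyTorch the dicts would come from `{k: tuple(v.shape) for k, v in model.state_dict().items()}`; the parameter names and shapes below are hypothetical, not taken from the actual Swin-B checkpoint):

```python
# Compare two {parameter name: shape} mappings and report differences.
# Names and shapes are hypothetical illustration data (e.g. a 21841-class
# ImageNet-22k head vs. a 1000-class ImageNet-1k head).

def shape_mismatches(ckpt_shapes, model_shapes):
    """Return (missing, unexpected, mismatched) between checkpoint and model."""
    missing    = sorted(set(model_shapes) - set(ckpt_shapes))
    unexpected = sorted(set(ckpt_shapes) - set(model_shapes))
    mismatched = sorted(
        (k, ckpt_shapes[k], model_shapes[k])
        for k in set(ckpt_shapes) & set(model_shapes)
        if ckpt_shapes[k] != model_shapes[k]
    )
    return missing, unexpected, mismatched

ckpt  = {"head.weight": (21841, 1024), "head.bias": (21841,)}
model = {"head.weight": (1000, 1024),  "head.bias": (1000,)}

missing, unexpected, mismatched = shape_mismatches(ckpt, model)
for name, ckpt_s, model_s in mismatched:
    print(f"{name}: checkpoint {ckpt_s} vs model {model_s}")
```

Loading with `model.load_state_dict(ckpt, strict=True)` would raise on any of these three categories, so printing them explicitly makes the error message actionable.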
Ok, it was because of some changes in timm; it should be fixed with https://github.com/RobustBench/robustbench/commit/c5e1f56a5eea26f12bcac44fd24661feb204ce02.
Glad to hear that! Thank you for sharing this change with me.
Added the models in https://github.com/RobustBench/robustbench/pull/185. From your logs it seems the (robust) accuracy was computed on the entire validation set, so I re-computed it on our subset of 5k points (see the json files for details). Please let me know if there's anything to modify.
Thank you so much for your hard work. By the way, where can I see your test json files? You are right: our robust and clean accuracies were evaluated on the whole ImageNet-1k validation set, so small differences in the final results are expected.
Your implementation looks good to me. Thank you again for your kind help.
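For context, evaluating on a fixed 5,000-image subset of the 50,000-image ImageNet validation set will generally give accuracies that differ slightly from the full-set numbers, simply due to sampling. A toy sketch of the effect (the per-image "correct" flags are random, purely to illustrate subset-vs-full differences, with the clean-accuracy rate loosely modeled on the 74.47% claim above):

```python
import random

# Toy illustration: accuracy on a fixed random subset of a larger
# evaluation set differs slightly from accuracy on the full set.
# The per-image correctness flags are random illustration data.
random.seed(0)

n_total, n_subset = 50_000, 5_000
correct = [random.random() < 0.7447 for _ in range(n_total)]  # ~74.47% correct

subset_idx = random.sample(range(n_total), n_subset)  # fixed 5k-point subset
full_acc   = sum(correct) / n_total
subset_acc = sum(correct[i] for i in subset_idx) / n_subset

print(f"full: {full_acc:.2%}, subset: {subset_acc:.2%}, "
      f"diff: {abs(full_acc - subset_acc):.2%}")
```

With a subset of this size the expected deviation is well under one percentage point, which is consistent with the small differences discussed above.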
The json files are those added with https://github.com/RobustBench/robustbench/commit/ae5814d290576ce3664ecddf56f0a51db27de374.
OK. Thank you for your diligent help. The difference seems marginal. I am happy that our results have been added to the RobustBench benchmark. I will close this issue. Best wishes!