RobustBench / robustbench

RobustBench: a standardized adversarial robustness benchmark [NeurIPS'21 Benchmarks and Datasets Track]
https://robustbench.github.io

[New Model] <Mo2022When> #104

Closed: mo666666 closed this issue 2 months ago

mo666666 commented 1 year ago

Paper Information

Leaderboard Claim(s)


Model 1

dedeswim commented 1 year ago

Hi, thanks for your submission. I will add the model as soon as possible!

dedeswim commented 1 year ago

Hi @mo666666, have you closed the issue because you don't want your model to be added anymore or for some other reason?

mo666666 commented 3 months ago

Hello, dedeswim!

Very sorry for the late reply. We closed the issue because we ran the experiments on a private cluster during the submission, which did not allow downloading the checkpoint to a local machine. Recently, however, we finally obtained enough computational resources (8*A100) to rerun the experiments on the ImageNet dataset. We found our reported result to be reproducible, with only a very small difference (<0.10%). Would you mind adding our provided checkpoint and reported results to the RobustBench benchmark? We believe it is the most important benchmark in the adversarial-robustness community, and our checkpoint might help researchers understand the robustness of ViTs.

Apologies again for the delayed response and any inconvenience caused.

Best regards,

Yichuan

mo666666 commented 3 months ago

Paper Information

Leaderboard Claim(s)


Model 1

Model Zoo:

fra31 commented 3 months ago

Hi,

thanks for the submission, I'll add the models in the next few days.

mo666666 commented 3 months ago

Glad to hear that! Thank you for your kind assistance!

fra31 commented 3 months ago

It seems that the checkpoints and logs are not accessible, probably need to be made public.

mo666666 commented 3 months ago

Sorry, that was a misconfiguration on our side. Are they accessible now?

fra31 commented 3 months ago

Yeah, it seems to work, thanks.

mo666666 commented 3 months ago

You are welcome! If any other issues come up, feel free to contact me!

fra31 commented 3 months ago

I've started adding the models here. I've tried to follow your implementation, but I get a parameter size mismatch when loading the Swin-B (it works fine for the ViT-B). Also, can you confirm that the preprocessing is the correct one?
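If helpful, here is a minimal diagnostic sketch for localizing such a size mismatch, assuming the checkpoint has been downloaded locally (the filename below is illustrative, not the actual artifact name):

```python
# Sketch: compare tensor shapes in a checkpoint against a freshly built
# timm model to localize a "size mismatch" raised by load_state_dict.
import timm
import torch

model = timm.create_model('swin_base_patch4_window7_224', pretrained=False)
ckpt = torch.load('Mo2022When_Swin-B.pt', map_location='cpu')  # illustrative filename
state_dict = ckpt.get('state_dict', ckpt)  # some checkpoints nest the weights

model_sd = model.state_dict()
for name, tensor in state_dict.items():
    if name not in model_sd:
        print(f'unexpected key: {name}')
    elif model_sd[name].shape != tensor.shape:
        print(f'{name}: checkpoint {tuple(tensor.shape)} vs model {tuple(model_sd[name].shape)}')
for name in model_sd.keys() - state_dict.keys():
    print(f'missing key: {name}')
```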

mo666666 commented 3 months ago

Thank you for your effort. Could you share the mismatch you see when loading our checkpoint? Although we trained the model with the swin_base_patch4_window7_224_in22k API, it seems that the parameters can also be loaded with the swin_base_patch4_window7_224 API. I think both of the following implementations should work:

```python
('Mo2022When_Swin-B', {
    'model': lambda: normalize_model(timm.create_model(
        'swin_base_patch4_window7_224_in22k', pretrained=False, num_classes=1000), mu, sigma),
    'gdrive_id': '1-SXi4Z2X6Zo_j8EO4slJcBMXNej8fKUd',
    'preprocessing': 'Res224',
}),
('Mo2022When_Swin-B', {
    'model': lambda: normalize_model(timm.create_model(
        'swin_base_patch4_window7_224', pretrained=False), mu, sigma),
    'gdrive_id': '1-SXi4Z2X6Zo_j8EO4slJcBMXNej8fKUd',
    'preprocessing': 'Res224',
}),
```
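
As for the preprocessing: my understanding, based on the key name (an assumption worth double-checking against robustbench's data pipeline), is that 'Res224' corresponds to a transform along these lines:

```python
from torchvision import transforms

# Presumed meaning of the 'Res224' preprocessing key: resize the short
# side to 224, center-crop to 224x224, and convert to a tensor.
res224 = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```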
fra31 commented 3 months ago

Ok, it was due to some changes in timm; it should be fixed with https://github.com/RobustBench/robustbench/commit/c5e1f56a5eea26f12bcac44fd24661feb204ce02.

mo666666 commented 3 months ago

Glad to hear that! Thank you for sharing this change with me.

fra31 commented 2 months ago

Added the models in https://github.com/RobustBench/robustbench/pull/185. From your logs it seems the (robust) accuracy was computed on the entire validation set, so I re-computed it on our subset of 5k points (see the json files for details). Please let me know if there's anything to modify.
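
For reference, once the models are in the model zoo, the numbers on the 5k-point subset should be reproducible with something along these lines (a sketch; the exact benchmark arguments, e.g. the local ImageNet path, depend on your setup):

```python
import torch
from robustbench.utils import load_model
from robustbench.eval import benchmark

# Load the newly added model from the model zoo and evaluate it on the
# 5k-image ImageNet subset with the standard Linf budget of 4/255.
model = load_model(model_name='Mo2022When_Swin-B',
                   dataset='imagenet',
                   threat_model='Linf')

clean_acc, robust_acc = benchmark(model,
                                  n_examples=5000,
                                  dataset='imagenet',
                                  threat_model='Linf',
                                  eps=4/255,
                                  data_dir='/path/to/imagenet',  # local ImageNet val set
                                  device=torch.device('cuda'),
                                  batch_size=32)
print(f'clean: {clean_acc:.2%}, robust: {robust_acc:.2%}')
```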

mo666666 commented 2 months ago

Thank you so much for your hard work. By the way, where can I find the test json files? You are right: our robustness and accuracy were evaluated on the whole ImageNet-1k validation set, so small differences in the final results are expected.

Your implementation looks good to me. Thank you again for your kind help.

fra31 commented 2 months ago

The json files are those added with https://github.com/RobustBench/robustbench/commit/ae5814d290576ce3664ecddf56f0a51db27de374.

mo666666 commented 2 months ago

OK. Thank you for your diligent help. The difference seems marginal. I am happy that our results have been added to the RobustBench benchmark. I will close this issue. Best wishes!