RobustBench / robustbench

RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
https://robustbench.github.io

[New Model] <Peng2023Robust> #151

Closed: ShengYun-Peng closed this issue 1 year ago

ShengYun-Peng commented 1 year ago

Paper Information

Leaderboard Claim(s)

Add here the claim for your model(s). Copy and paste the following subsection for the number of models you want to add.

Model 1

Model 2

Model Zoo:

fra31 commented 1 year ago

Hi,

thanks for the submission, we'll add the model as soon as possible.

fra31 commented 1 year ago

This adds the CIFAR-10 model. Could you please provide a model definition for the ImageNet classifier in a similar format (with all configurations listed, as done here)?

ShengYun-Peng commented 1 year ago

Thanks @fra31! Shall I commit to the add_models_2 branch?

fra31 commented 1 year ago

Either that, or you can just add the model definition here (I guess the definition of all layers is already present in your code, so only the final architecture, similar to NormalizedWideResNet(...) for CIFAR-10, would be needed), whichever is more convenient for you.
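
(For readers following along: the kind of fully specified definition being requested looks roughly like the sketch below. The backbone, wrapper, and hyperparameters are placeholders, not the actual Peng2023Robust configuration.)

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50  # stand-in backbone; the real entry uses the paper's architecture


class NormalizedModel(nn.Module):
    """Wraps a classifier so it accepts [0, 1] inputs and normalizes them internally,
    mirroring how model-zoo definitions bundle normalization with the architecture."""

    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        self.register_buffer('mean', torch.tensor(mean).view(1, 3, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, 3, 1, 1))

    def forward(self, x):
        return self.model((x - self.mean) / self.std)


def build_imagenet_entry():
    # Placeholder configuration: the actual definition would spell out the paper's
    # architecture and all of its hyperparameters here.
    backbone = resnet50()
    return NormalizedModel(backbone, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
```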

ShengYun-Peng commented 1 year ago

The model definition is slightly different from the CIFAR-10 WRN, but I'll add both models to the same file.

ShengYun-Peng commented 1 year ago

Hi @fra31, could you grant me access to the add_models_2 branch? I updated the code locally on my machine, but cannot push commits.

[Screenshot of the rejected push, 2023-09-05]
fra31 commented 1 year ago

Can you maybe create a PR instead? I can take care of it from there.

ShengYun-Peng commented 1 year ago

Sure, PR created: https://github.com/RobustBench/robustbench/pull/153

fra31 commented 1 year ago

Great, thanks!

fra31 commented 1 year ago

Added the models with https://github.com/RobustBench/robustbench/pull/154. Is this the right preprocessing for ImageNet?

ShengYun-Peng commented 1 year ago

Right, my test_transform is the same as Res256Crop224.
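
(For reference, Res256Crop224 should correspond to the standard ImageNet test-time pipeline: resize the shorter side to 256, then center-crop to 224. A torchvision equivalent:)

```python
import torchvision.transforms as transforms

# Resize the shorter side to 256, then take a 224x224 center crop; models in the
# RobustBench zoo generally expect [0, 1] inputs, so no normalization is applied here.
test_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```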

fra31 commented 1 year ago

Just another question: are the reported clean and robust accuracies for ImageNet computed on the entire validation set (50k points) rather than on the subset (5k points) we use for the leaderboard?

ShengYun-Peng commented 1 year ago

Only the 5k subset is used for AutoAttack on ImageNet. The configs are here and loaded here. The ImageNet model was tested months ago, so I couldn't find the log. Sorry if that caused the confusion above.
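
(For context, the 5k-subset evaluation described here looks roughly like the sketch below; the data path is a placeholder and some argument names may differ across robustbench versions.)

```python
from autoattack import AutoAttack
from robustbench.data import load_imagenet
from robustbench.utils import load_model

# Load the fixed 5k ImageNet validation subset used by the leaderboard.
x_test, y_test = load_imagenet(n_examples=5000, data_dir='/path/to/imagenet')

model = load_model('Peng2023Robust', dataset='imagenet', threat_model='Linf').cuda().eval()

# Standard AutoAttack at the Linf budget used for the ImageNet leaderboard (eps = 4/255);
# batches are moved to the GPU internally, so CPU tensors can be passed in.
adversary = AutoAttack(model, norm='Linf', eps=4 / 255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=64)
```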

fra31 commented 1 year ago

It's just that for our 5k-point subset I get 73.10% clean accuracy, while 73.448% on the full validation set, which is much closer to (or matches, depending on how one rounds) what you report. It might also be due to a difference in some package version, though.

ShengYun-Peng commented 1 year ago

That makes sense.

> It's just that for our 5k-point subset I get 73.10% clean accuracy, while 73.448% on the full validation set, which is much closer to (or matches, depending on how one rounds) what you report. It might also be due to a difference in some package version, though.

fra31 commented 1 year ago

Do you mean that the clean accuracy is reported on the full validation set, while the robust accuracy is on the subset?

ShengYun-Peng commented 1 year ago

Oh, I meant the package differences. As shown here, I'm directly using the benchmark API from robustbench, so there's no way the clean accuracy is on the full validation set while the robust accuracy is only on the 5k subset.
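
(For reference, a benchmark call along these lines computes clean and robust accuracy on the same n_examples, so both numbers necessarily come from the 5k subset; the argument names are from memory and worth double-checking against the current robustbench release.)

```python
import torch
from robustbench.eval import benchmark
from robustbench.utils import load_model

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = load_model('Peng2023Robust', dataset='imagenet', threat_model='Linf')

# Clean and robust accuracy are both computed on the same n_examples subset,
# i.e. the 5k leaderboard split, never on the full 50k validation set.
clean_acc, robust_acc = benchmark(
    model,
    dataset='imagenet',
    threat_model='Linf',
    eps=4 / 255,
    n_examples=5000,
    preprocessing='Res256Crop224',
    batch_size=64,
    device=device,
)
```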

fra31 commented 1 year ago

The leaderboards should now be updated.

ShengYun-Peng commented 1 year ago

Thanks so much! @fra31