Closed: ngoctnq closed this issue 2 years ago.
Hi, this is indeed weird! Thanks a lot for opening the issue and for providing an MWE. I'll look into it ASAP.
I found and fixed the issue, thanks again for finding it! Just a small note about your MWE: here you shouldn't be calling `model.model(x)`, but just `model(x)`. If you call the former, you won't apply normalization to the inputs, which will lead to inaccurate results.
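A minimal sketch of the difference, assuming the loaded RobustBench model wraps the raw network behind a `.model` attribute (as in the MWE) and applies the dataset normalization inside its own forward pass; the model name here is just an illustrative entry from the CIFAR-10 Linf leaderboard:

```python
import torch
from robustbench.utils import load_model

# Illustrative model name; any CIFAR-10 Linf entry works the same way.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf').eval()

x = torch.rand(4, 3, 32, 32)  # dummy CIFAR-10-shaped batch in [0, 1]

with torch.no_grad():
    logits = model(x)             # correct: input normalization is applied
    # logits = model.model(x)     # wrong: calls the raw network and skips normalization
```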
Sorry for the mistake; I hope it didn't lead you astray while bug hunting! I probably messed something up when cleaning the code to create the MWE.
No worries, I noticed it right away! I hope this bug didn't cause big issues for your project 😊
Using RobustBench's `load_model` gives a model that predicts with random-like accuracy, while manually loading the checkpoint works as expected. MWE: https://colab.research.google.com/drive/1l4RhImKkAvEOTzIhrZjeGA0xdh1B4M4u?usp=sharing
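For context, a rough sketch of the kind of accuracy check the MWE runs, using RobustBench's `load_model`, `load_cifar10`, and `clean_accuracy` helpers; the model name is a placeholder, not the one from the notebook:

```python
from robustbench.data import load_cifar10
from robustbench.utils import load_model, clean_accuracy

# Small clean test split (images in [0, 1], no normalization applied here).
x_test, y_test = load_cifar10(n_examples=200)

# Placeholder model name from the CIFAR-10 Linf leaderboard.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf').eval()

acc = clean_accuracy(model, x_test, y_test)
print(f'clean accuracy: {acc:.3f}')  # ~0.10 (random-like) would reproduce the bug
```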