serre-lab / Adversarial-Alignment

Scaling up deep neural networks to improve their performance on ImageNet makes them more tolerant to adversarial attacks, but successful attacks on these models are misaligned with human perception.
https://serre-lab.github.io/Adversarial-Alignment/
MIT License

data release request #1

Open · NZ42 opened 1 month ago

NZ42 commented 1 month ago

hi! Is there any chance the CSVs used to produce the various graphs could be released, in particular those for Figures 2 and 3? I see they are supposed to be case-specific .csv files (possibly on your Google Drive?), but I'd like to access the data without having to reproduce your experiments.

thanks!

cc @TonyFPY @fel-thomas

TonyFPY commented 1 month ago

Hi, thank you for your message! We appreciate your interest in our work. The code we uploaded has some minor errors, and we are still addressing the issues raised by reviewers. The trends in Figures 2 and 3 remain correct, but the models need to be re-evaluated. Once we finish our experiments, we will let you know. Thanks!

NZ42 commented 1 month ago

understood, thank you!