facebookresearch / unibench

Python library to evaluate VLMs' robustness across diverse benchmarks

Adversarial attacks #3

Open HashmatShadab opened 3 months ago

HashmatShadab commented 3 months ago

Hi! Thanks for sharing your work.

I would like to know whether adversarial attacks are also included in the benchmark. If so, which types of attacks are included?

haideraltahan commented 3 months ago

Hi @HashmatShadab

We currently include ImageNet-C (https://github.com/hendrycks/robustness), but other benchmarks can certainly be added using the example in the README!

Please let us know if there are particular benchmarks you would like to see in UniBench, and I'll try to add them to the library.
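
For illustration, here is a minimal sketch of how an adversarial-robustness check could be run against a CLIP-style VLM. This is not UniBench's benchmark API; it uses Hugging Face CLIP as a stand-in model, FGSM as the attack, and an arbitrary epsilon, and for simplicity the perturbation is applied in the processor's normalized pixel space. All names and parameters below are assumptions for the sketch.

```python
# Minimal FGSM-style robustness check for a CLIP-like VLM (illustrative only).
# Assumptions: Hugging Face CLIP as a stand-in model; epsilon and prompts are
# arbitrary; the perturbation is applied in normalized pixel space.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


def fgsm_accuracy(images, labels, class_prompts, epsilon=2 / 255):
    """Compare clean vs. FGSM-perturbed zero-shot accuracy."""
    text_inputs = processor(text=class_prompts, return_tensors="pt", padding=True)
    pixel_values = processor(images=images, return_tensors="pt")["pixel_values"]
    pixel_values.requires_grad_(True)

    # Clean zero-shot prediction: image-text similarity logits.
    logits = model(pixel_values=pixel_values, **text_inputs).logits_per_image
    clean_acc = (logits.argmax(dim=-1) == labels).float().mean()

    # FGSM: one signed-gradient step that increases the cross-entropy loss.
    loss = torch.nn.functional.cross_entropy(logits, labels)
    loss.backward()
    adv_pixels = (pixel_values + epsilon * pixel_values.grad.sign()).detach()

    with torch.no_grad():
        adv_logits = model(pixel_values=adv_pixels, **text_inputs).logits_per_image
    adv_acc = (adv_logits.argmax(dim=-1) == labels).float().mean()
    return clean_acc.item(), adv_acc.item()
```

A benchmark built this way would report the drop from clean to adversarial accuracy as the robustness metric; stronger multi-step attacks (e.g., PGD) could be swapped in the same way.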