facebookresearch / unibench

Python Library to evaluate VLM models' robustness across diverse benchmarks

Adversarial attacks #3

Open HashmatShadab opened 2 months ago

HashmatShadab commented 2 months ago

Hi! Thanks for sharing your work.

I would like to know whether adversarial attacks are also included in the benchmark. If yes, which types of attacks have been included?

haideraltahan commented 2 months ago

Hi @HashmatShadab

We have ImageNet-C (https://github.com/hendrycks/robustness), but other benchmarks could certainly be added using the example in the README!

Please let us know if there are particular benchmarks you would like to see in UniBench, and I'll try to add them to the library.
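
For context on what such a benchmark could look like: whereas ImageNet-C covers natural corruptions, an adversarial benchmark would typically perturb inputs with a gradient-based attack such as FGSM. Below is a minimal sketch of FGSM in plain PyTorch; it is not part of UniBench's API, and the `model`, `images`, and `labels` arguments are placeholders for whatever classifier and data loader the benchmark would wrap.

```python
# Minimal FGSM sketch (not UniBench code): generates adversarially perturbed
# images that a custom robustness benchmark could evaluate a model on.
import torch
import torch.nn.functional as F


def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Return images perturbed with the Fast Gradient Sign Method."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid
    # pixel range (assumes inputs are normalized to [0, 1]).
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()
```

The perturbed images could then be fed through the same evaluation loop as any other benchmark, with accuracy on the adversarial inputs reported as the robustness metric.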