chs20 / RobustVLM

[ICML 2024] Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models

Classification evaluation for LLaVA #4

Open · rishika2110 opened this issue 4 months ago

rishika2110 commented 4 months ago

Hi, the code currently throws a `NotImplementedError` for LLaVA, but I believe the paper demonstrates zero-shot classification on LLaVA. When will the code be updated to include this feature? Alternatively, could you point out the main parts that would need significant changes to incorporate LLaVA?

Thank you.

chs20 commented 4 months ago

Hi, thanks for asking. We demonstrate zero-shot classification only for the standalone CLIP models, and consider LLaVA and OpenFlamingo for captioning/VQA tasks.
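For reference, zero-shot classification with a standalone CLIP model works roughly like this (a minimal sketch using the open_clip API, not this repo's evaluation code; the checkpoint name, image path, and label set are illustrative):

```python
import torch
from PIL import Image
import open_clip

# Load a CLIP model and its preprocessing transform (checkpoint name is
# illustrative; substitute the robust fine-tuned weights where appropriate).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-L-14", pretrained="openai"
)
tokenizer = open_clip.get_tokenizer("ViT-L-14")
model.eval()

class_names = ["cat", "dog", "car"]  # illustrative label set
prompts = [f"a photo of a {c}" for c in class_names]

image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # path is illustrative

with torch.no_grad():
    text_features = model.encode_text(tokenizer(prompts))
    image_features = model.encode_image(image)
    # Normalize so the dot product below is a cosine similarity
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)

# Similarity between the image embedding and each class-prompt embedding;
# the most similar prompt gives the predicted class.
logits = image_features @ text_features.T
pred = logits.argmax(dim=-1).item()
print(class_names[pred])
```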

rishika2110 commented 4 months ago

Thank you for the clarification. I have another question: Why is the batch size hardcoded to 1? Is it just to avoid padding text tokens? Or am I missing something?

chs20 commented 4 months ago

You're right, it should definitely be possible to run with larger batch sizes. It's just hardcoded to a batch size of 1 in a few places, since we couldn't fit much more on our devices anyway for adversarial evaluations.
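For anyone adapting the code to larger batches: batching prompts of different lengths does require padding, and for decoder-only generation left-padding is the usual choice so that generated tokens directly follow each prompt. A generic sketch with a HuggingFace-style tokenizer (not this repo's code; the model name is a stand-in):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model for illustration
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
tokenizer.padding_side = "left"  # left-pad so generation continues each prompt

model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = ["Describe the image:", "What color is the car in the image?"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)

out = model.generate(
    **batch,
    max_new_tokens=32,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```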

rishika2110 commented 2 months ago

Hi, thank you so much for clarifying everything. Just one last question: does the code use beam search to generate the outputs?

chs20 commented 1 month ago

No problem :) We basically stick to how the models are evaluated in their respective papers: greedy decoding without beam search for LLaVA, and beam search with 3 beams for OpenFlamingo.
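In terms of the HuggingFace `generate()` API, the two decoding setups look like this (an illustrative sketch with a stand-in model, not this repo's code; the prompt and token budgets are arbitrary):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("A photo of", return_tensors="pt")

# Greedy decoding (no beam search), as for LLaVA:
greedy = model.generate(**inputs, do_sample=False, num_beams=1, max_new_tokens=32)

# Beam search with 3 beams, as for OpenFlamingo:
beam = model.generate(**inputs, do_sample=False, num_beams=3, max_new_tokens=32)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(beam[0], skip_special_tokens=True))
```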