Yufang-Liu / clip_hallucination

[EMNLP 2024] Investigating and Mitigating Object Hallucinations in Pretrained Vision-Language (CLIP) Models

How to evaluate LLaVA #4

Open · LiqiangJing opened 1 week ago

LiqiangJing commented 1 week ago

Hi,

Could you tell me how to evaluate LLaVA with the enhanced CLIP you trained?

Yufang-Liu commented 1 week ago

You can find the LoRA parameters for LLaVA here. The `clip` folder contains the enhanced CLIP model parameters, and the `config.json` file already points to the location of that CLIP model, so you can load the LoRA parameters for LLaVA directly.
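
For reference, a minimal loading sketch assuming the released checkpoint follows the standard LLaVA LoRA layout. The paths below are placeholders (the actual release paths are linked above), and the base model is an assumption; `load_pretrained_model` is the loader from the official LLaVA repo, which merges LoRA weights when the model name contains "lora":

```python
# Minimal sketch, not the authors' exact evaluation script.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

lora_path = "/path/to/released/llava-lora"  # placeholder: folder with the LoRA parameters
base_path = "liuhaotian/llava-v1.5-7b"      # assumption: base model the LoRA was trained on

# The builder reads config.json from lora_path; since that config already
# points at the enhanced CLIP checkpoint (the `clip` folder), the enhanced
# vision tower is picked up automatically when the model is loaded.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=lora_path,
    model_base=base_path,
    model_name=get_model_name_from_path(lora_path),  # must contain "lora" to trigger merging
)
```

After loading, evaluation should proceed as with any LLaVA checkpoint, e.g. running the hallucination benchmarks (such as POPE) with the evaluation scripts that ship with the LLaVA repo.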