showlab / Awesome-MLLM-Hallucination

📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM).

Happy to contribute our recent work RITUAL! (Mitigating hallucinations in MLLMs) #6

Closed sangminwoo closed 5 months ago

sangminwoo commented 5 months ago

Hi @JosephPai,

Thank you for curating this fantastic survey repository! Please consider adding our recent work to the list. We are happy to contribute!

RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs [Paper | Code | Project]

In this work, we propose RITUAL, a straightforward, training-free method to enhance robustness against hallucinations in Large Vision-Language Models (LVLMs). RITUAL employs random image transformations to complement the original probability distribution, aiming to reduce the likelihood of hallucinatory visual explanations by exposing the model to a diverse range of visual scenarios.
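The idea of complementing the original distribution with ones obtained under random image transformations can be sketched as follows. This is a minimal illustration only, assuming the combination is a simple weighted sum of next-token distributions; the function name `ritual_decode_step` and the weighting scheme are hypothetical, and the paper's actual formulation may differ.

```python
import numpy as np

def ritual_decode_step(p_orig, p_transformed, alpha=0.5):
    """Combine the next-token distribution from the original image
    with distributions from randomly transformed views.

    Hypothetical helper: averages the transformed-view distributions,
    adds them to the original with weight alpha, and renormalizes.
    """
    p = p_orig + alpha * np.mean(p_transformed, axis=0)
    return p / p.sum()

# Toy example: a vocabulary of 4 tokens and 2 random transformations.
p_orig = np.array([0.1, 0.6, 0.2, 0.1])
p_trans = np.array([[0.2, 0.3, 0.4, 0.1],
                    [0.1, 0.4, 0.3, 0.2]])
p = ritual_decode_step(p_orig, p_trans)
print(p)  # a valid probability distribution over the 4 tokens
```

In practice the transformed views would come from applying random augmentations (e.g. crops or flips) to the input image before running the LVLM's visual encoder; here they are stand-in arrays.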

Thank you!

JosephPai commented 5 months ago

Hi @sangminwoo , Thanks for your interest in this repo. The paper has been added to the Hallucination Mitigation section according to its release date. Very interesting work!