Paper: Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
link: https://arxiv.org/pdf/2306.14565.pdf
Name: LRV-Instruction
Focus: Multimodal
Notes: A benchmark to evaluate hallucination and instruction-following ability
bib:
@article{liu2023aligning,
title={Aligning Large Multi-Modal Model with Robust Instruction Tuning},
author={Liu, Fuxiao and Lin, Kevin and Li, Linjie and Wang, Jianfeng and Yacoob, Yaser and Wang, Lijuan},
journal={arXiv preprint arXiv:2306.14565},
year={2023}
}
@FuxiaoLiu Thanks for the recommendation. The survey paper is currently under review; we will add your work to the revised version after receiving the first-round reviews. :)