Thanks for your awesome work.
As shown in Tab. 2, the preference training dataset contains 150K samples from three data sources (ShareGPT-V, LLaVAR, LLaVA-Instruct). Do you have plans to release the ready-made preference datasets with negative responses (including those generated via Image-Weakened prompting and Error Injection)? Releasing these ready-made datasets would help the community and researchers build on your work.