Hi, thanks for your interest. We used the exact same fine-tuning scripts from the original LLaVA (https://github.com/haotian-liu/LLaVA/blob/main/scripts/v1_5/finetune.sh) and MiniGPT-4 (https://github.com/Vision-CAIR/MiniGPT-4/blob/main/MiniGPTv2_Train.md).
For example, for LLaVA fine-tuning, you can first convert our data to the LLaVA format using this script, then either append it to the original LLaVA fine-tuning data for mixed fine-tuning, or append it to a randomly selected 5k (or 10k) subset of the LLaVA fine-tuning data for post-hoc fine-tuning. Let me know if you encounter any problems with this process.
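In case it helps, here is a minimal sketch of the mixing step (file names are placeholders; it assumes the converted data is already a JSON list in LLaVA's conversation format):

```python
import json
import random

# Load the converted data and the original LLaVA fine-tuning data.
# Both are assumed to be JSON lists of LLaVA-style samples
# (dicts with "id", "image", and "conversations" fields).
with open("our_data_llava_format.json") as f:
    ours = json.load(f)
with open("llava_finetune_data.json") as f:
    llava = json.load(f)

# Mixed fine-tuning: append our data to the full LLaVA fine-tuning set.
with open("mixed_finetune.json", "w") as f:
    json.dump(llava + ours, f)

# Post-hoc fine-tuning: append our data to a randomly sampled
# 5k (or 10k) subset of the LLaVA fine-tuning data.
random.seed(0)
subset = random.sample(llava, 5000)
with open("posthoc_finetune.json", "w") as f:
    json.dump(subset + ours, f)
```

The resulting JSON file can then be passed as the data path in the fine-tuning script linked above.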