[Open] Chloe1997 opened 5 months ago
Hi @Chloe1997,
Yes, to train LLaVA with S2 you can't use the original pre-trained projector from LLaVA, because the mm_hidden_size is different. You can either train the projector yourself or use the projector from the pre-trained LLaVA with S2. The mm_hidden_size in the config of the LLaVA-S2 checkpoint should already be 3072, so there is probably no need to change that.
To train LLaVA with S2, you can use the latest LLaVA repo, which has S2 integrated (see the PR here), and apply an additional change here. Then you can train LLaVA with S2 just like a regular LLaVA, except for two new configs: `s2=True` and `s2_scales="336,672,1008"`. Please see the instructions on how to train LLaVA with S2 here.
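For context on the 3072 vs. 1024 mismatch discussed in this thread: assuming S2 concatenates the vision features from each scale along the channel dimension (an inference from the numbers here, not a quote from the repo), the projector input grows from the per-scale 1024 to 1024 × 3 scales = 3072. A quick sketch:

```python
def s2_hidden_size(per_scale_dim, s2_scales):
    """Projector input width under S2, assuming per-scale features
    are concatenated channel-wise across the listed scales."""
    scales = [int(s) for s in s2_scales.split(",")]
    return per_scale_dim * len(scales)

# CLIP-L features are 1024-dim per scale; three scales give 3072.
print(s2_hidden_size(1024, "336,672,1008"))  # 3072
```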
Hi! Your work is great. Recently, I wanted to start fine-tuning LLaVA with the S^2 wrapper from the pre-trained LLaVA at https://huggingface.co/liuhaotian/llava-v1.5-7b. However, I was struggling with the mm_hidden_size of the pre-trained LLaVA projector. According to the following snippet, the error occurred because the S^2 wrapper sets the projector size to 3072 while the pre-trained projector is 1024. I have tried downloading the pre-trained S^2 projector you provided and revising the mm_hidden_size in config.json. Do you have any suggestions? Thank you.