tsb0601 / MMVP


LLaVA-1.5 stage2 Training batchsize #13

Open Z-MU-Z opened 4 months ago

Z-MU-Z commented 4 months ago

In https://github.com/tsb0601/MMVP/blob/main/LLaVA/finetune.sh#L20, I notice that --per_device_train_batch_size is set to 11. However, Table 4 in the paper's appendix ("Hyperparameters for MoF training on LLaVA and LLaVA-1.5") lists the LLaVA-1.5 stage-2 training batch size as 128. Have I misunderstood something? It seems that --per_device_train_batch_size should be set to 8.
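
For reference, a minimal sketch of the arithmetic, assuming the usual HuggingFace Trainer convention that the global batch size equals per_device_train_batch_size × number of GPUs × gradient_accumulation_steps. The GPU count and accumulation steps below are assumptions for illustration, not values taken from the repo:

```sh
# Hypothetical sketch of the global-batch-size arithmetic.
# NUM_GPUS and GRAD_ACCUM are assumed values, not from finetune.sh.
NUM_GPUS=8       # assumed GPU count
GRAD_ACCUM=2     # assumed --gradient_accumulation_steps
echo $((8  * NUM_GPUS * GRAD_ACCUM))   # 128 -> matches Table 4
echo $((11 * NUM_GPUS * GRAD_ACCUM))   # 176 -> does not match Table 4
```

Under these assumptions, a per-device batch size of 8 reproduces the paper's global batch size of 128, while 11 does not divide 128 evenly for any whole GPU count and accumulation setting.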