Open Z-MU-Z opened 4 months ago
In https://github.com/tsb0601/MMVP/blob/main/LLaVA/finetune.sh#L20, I notice that `--per_device_train_batch_size` is set to 11. However, in the paper's appendix (Table 4, "Hyperparameters for MoF training on LLaVA and LLaVA-1.5"), the LLaVA-1.5 stage-2 training batch size is 128. Have I misunderstood something? It seems that `--per_device_train_batch_size` should be set to 8.
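For context, in HuggingFace-style training scripts the global (effective) batch size is the per-device batch size times the number of GPUs times the gradient-accumulation steps. A minimal sketch of the arithmetic behind the question (the 16-GPU count and accumulation of 1 are my assumptions for illustration, not values confirmed by the repo or paper):

```python
def effective_batch_size(per_device: int, num_gpus: int, grad_accum: int = 1) -> int:
    """Global batch size = per-device size * number of devices * accumulation steps."""
    return per_device * num_gpus * grad_accum

# With the value currently in finetune.sh (assuming 16 GPUs, no accumulation):
print(effective_batch_size(11, 16))  # 176, which does not match Table 4's 128

# The paper's 128 would instead be reproduced by a per-device size of 8:
print(effective_batch_size(8, 16))   # 128
```

Under these assumptions, 8 is the per-device value consistent with the paper's reported batch size of 128.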