42Shawn / LLaVA-PruMerge

LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models
Apache License 2.0

Question about finetuning process #8

Closed ZiangWu-77 closed 4 months ago

ZiangWu-77 commented 4 months ago

Great work! I am replicating it and was wondering which dataset you used when finetuning PruMerge and PruMerge+. Eager for your answer.

niiickZ commented 4 months ago

I have the same question, and I'm also curious about the details of the finetuning hyperparameters.

42Shawn commented 4 months ago

Basically, I used the same finetuning setup inherited from LLaVA, including the finetuning hyperparameters and datasets. For more training details, please refer to the original LLaVA project.
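For readers trying to replicate this, a minimal sketch of a LLaVA-v1.5-style finetuning launch is below. The flags, paths, and the 665K mixture data file are assumptions based on the upstream LLaVA repo's `scripts/v1_5/finetune.sh`, not details confirmed in this thread; verify everything against that script before running.

```shell
# Hedged sketch of a LLaVA v1.5-style finetuning launch.
# All flags and paths are assumed from the upstream LLaVA repo
# (scripts/v1_5/finetune.sh) and may differ from the authors' exact setup.
deepspeed llava/train/train_mem.py \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path lmsys/vicuna-7b-v1.5 \
    --version v1 \
    --data_path ./playground/data/llava_v1_5_mix665k.json \
    --image_folder ./playground/data \
    --vision_tower openai/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --bf16 True \
    --num_train_epochs 1 \
    --per_device_train_batch_size 16 \
    --learning_rate 2e-5 \
    --output_dir ./checkpoints/llava-v1.5-7b-prumerge
```

The key point from the reply above is that nothing in this launch changes for PruMerge: the token-reduction module slots into the vision side while the training recipe stays as in stock LLaVA.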

ZiangWu-77 commented 4 months ago

> Basically, I used the same finetuning setup inherited from LLaVA, including the finetuning hyperparameters and datasets. For more training details, please refer to the original LLaVA project.

Roger that. Thanks for your reply.