Open daixiangzi opened 2 months ago
MMDU can be applied to various LVLMs
We will release our model soon. You can also train your own model with MMDU yourself; the training code depends on which model you are using.
Haha, we are preparing to do this.
The max image number is 20 in MMDU. In fact, if I use llava3-clip-l14-336 (max token length is 8k), I think I need to use token compression. Have you done any research in this area?
One of the purposes of MMDU-45k is to enhance the dialogue capabilities of LVLMs in long multi-modal contexts involving text and images. The maximum token length in MMDU-45k is 17k. During finetuning, we generally train with context lengths of 16k or 32k, without considering token compression.
The bulk of the length distribution of MMDU-45k and the MMDU benchmark is around 8k, so using MMDU-45k to finetune an 8k-context LVLM is also feasible.
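Since the discussion above is about whether an 8k-context model can be finetuned on MMDU-45k, here is a minimal sketch of filtering the data by text-token length beforehand. The file name, JSON layout, field names, and tokenizer are assumptions for illustration, not the actual MMDU-45k release format.

```python
# Sketch: keep only samples whose text fits an 8k-token context.
# Assumed layout (hypothetical): a JSON list of samples, each with a
# "conversations" list of {"from", "value"} turns.
import json
from transformers import AutoTokenizer

MAX_LEN = 8192
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

with open("mmdu-45k.json") as f:
    samples = json.load(f)

def text_token_count(sample):
    # Counts text tokens only; visual tokens added by the vision encoder
    # (e.g. 576 per image for CLIP-L/14-336) must be budgeted on top.
    text = "\n".join(turn["value"] for turn in sample["conversations"])
    return len(tokenizer(text).input_ids)

kept = [s for s in samples if text_token_count(s) <= MAX_LEN]
print(f"kept {len(kept)} / {len(samples)} samples under {MAX_LEN} text tokens")

with open("mmdu-45k-8k.json", "w") as f:
    json.dump(kept, f)
```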
I tried fine-tuning clip_l14_336-llama3-8b on MMDU, and even with a batch size of 1 it still runs out of memory on an 80GB A100.
MMDU has long contexts; use zero3.json.
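The zero3.json referred to here is not shown in this thread; the following is a minimal sketch of the kind of DeepSpeed ZeRO-3 configuration such a file typically contains, written from Python so the values are easy to tweak. The exact file in the MMDU training code may differ.

```python
# Sketch of a minimal DeepSpeed ZeRO-3 config (common defaults, not the
# repo's actual zero3.json). "auto" values are filled in by the HF Trainer.
import json

zero3_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # shard parameters, gradients and optimizer states across GPUs
        "overlap_comm": True,
        "contiguous_gradients": True,
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_clipping": "auto",
}

with open("zero3.json", "w") as f:
    json.dump(zero3_config, f, indent=2)
```

If memory is still tight, common follow-ups (not confirmed as supported by the MMDU code in this thread) are adding CPU "offload_optimizer"/"offload_param" entries under "zero_optimization" and enabling gradient checkpointing in the trainer.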
I do use zero3 in fact, but it still OOMs.