Liuziyu77 / MMDU

Official repository of MMDU dataset
Apache License 2.0

Do you plan to release the training code? #1

Open daixiangzi opened 2 months ago

Liuziyu77 commented 2 months ago

We will release our model soon. You can also train your own model with MMDU yourself; the training code depends on which model you are using.

Liuziyu77 commented 2 months ago

MMDU can be applied to various LVLMs.

daixiangzi commented 2 months ago

> We will release our model soon. You can also train your own model with MMDU yourself; the training code depends on which model you are using.

Haha, we are preparing to do this.

daixiangzi commented 2 months ago

> MMDU can be applied to various LVLMs.

The max image number is 20 in MMDU. In fact, if I use llava3-clip-l14-336 (max token length is 8k), I think I need to use token compression. Have you done any research in this area?

Liuziyu77 commented 2 months ago

> MMDU can be applied to various LVLMs.
>
> The max image number is 20 in MMDU. In fact, if I use llava3-clip-l14-336 (max token length is 8k), I think I need to use token compression. Have you done any research in this area?

One of the purposes of MMDU-45k is to enhance the dialogue capabilities of LVLMs in long multi-modal contexts involving text and images. The maximum token length in MMDU-45k is 17k. During fine-tuning, we generally use context lengths of 16k or 32k to train the model, without considering token compression.

Liuziyu77 commented 2 months ago

The data length distribution of MMDU-45k and the MMDU benchmark is mainly around 8k. Therefore, using MMDU-45k to fine-tune an 8k-LVLM is also feasible.
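For context on the token-compression question above, here is a quick back-of-envelope check (assuming the standard CLIP ViT-L/14 patch layout at 336×336 input, with no projector-side downsampling) of why the 20-image upper bound can pressure an 8k context window:

```python
# Rough image-token budget for a CLIP-L/14-336 encoder (assumed layout).
# A 336x336 input with 14x14 patches gives a 24x24 grid = 576 tokens/image.
IMAGE_SIZE = 336
PATCH_SIZE = 14
MAX_IMAGES = 20          # max images per sample in MMDU
CONTEXT_LIMIT = 8_192    # assumed 8k-token LVLM context window

tokens_per_image = (IMAGE_SIZE // PATCH_SIZE) ** 2   # 576
image_tokens = tokens_per_image * MAX_IMAGES         # 11520

print(tokens_per_image)                 # 576
print(image_tokens)                     # 11520
print(image_tokens > CONTEXT_LIMIT)     # True: images alone exceed 8k
```

At the 20-image maximum, image tokens alone overflow an 8k window, so for those worst-case samples either token compression or a longer context (as used in the 16k/32k fine-tuning above) is needed; the typical ~8k samples still fit.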

daixiangzi commented 2 months ago

I tried fine-tuning clip_l14_336-llama3-8b using MMDU, and even with a batch size of 1, it still runs out of memory on an 80 GB A100.

Liuziyu77 commented 2 months ago

> I tried fine-tuning clip_l14_336-llama3-8b using MMDU, and even with a batch size of 1, it still runs out of memory on an 80 GB A100.

MMDU has long contexts; use zero3.json.

daixiangzi commented 2 months ago

> I tried fine-tuning clip_l14_336-llama3-8b using MMDU, and even with a batch size of 1, it still runs out of memory on an 80 GB A100.
>
> MMDU has long contexts; use zero3.json.

I do use ZeRO-3, in fact, but it still OOMs.
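For reference, here is a minimal sketch of what a zero3.json like the one mentioned above might contain. This is an assumption, not the repo's actual config: when ZeRO-3 alone still OOMs on long-context fine-tuning, offloading parameters and optimizer state to CPU (plus activation checkpointing on the model side) is a common next step.

```python
import json

# Hypothetical DeepSpeed ZeRO stage-3 config for long-context fine-tuning.
# The CPU-offload settings are an assumption aimed at the OOM described
# above, not the configuration actually shipped with MMDU.
zero3_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        # Offload params and optimizer state to host RAM to cut GPU memory.
        "offload_param": {"device": "cpu", "pin_memory": True},
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

with open("zero3.json", "w") as f:
    json.dump(zero3_config, f, indent=2)
```

Even with ZeRO-3 plus offload, a 16k-32k sequence through an 8B model can exhaust 80 GB unless gradient/activation checkpointing is also enabled in the training script, so that is worth checking when the OOM persists.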