Bilibilee opened this issue 4 days ago
Thanks for your interest in our work. There may be some small code typos introduced when we pushed to GitHub; you can simply fix them for further usage.
Hello, I am confused about the inconsistencies between the first training stage and the MLLMSD training stage. In the first training stage, 33 new tokens are added (`<img>, <img_0>, ..., <img_31>`), with only the llm_head weight and embed_token weight corresponding to the new tokens being trained. In the MLLMSD training stage, 35 new tokens are added (`<img>, <img_start>, <img_end>, <img_0>, ..., <img_31>`). This discrepancy in the number of new tokens causes the MLLMSD model's `load_pretrain_MLLM_alignment` function to fail.
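To make the mismatch concrete, here is a minimal sketch of the failing copy. The token lists are the ones quoted above, while the hidden size and the copy itself are my assumptions about what `load_pretrain_MLLM_alignment` does, not the repo's actual code:

```python
# Sketch of the new-token row mismatch between the two stages.
import torch

stage1_tokens = ["<img>"] + [f"<img_{i}>" for i in range(32)]  # 33 new tokens
mllmsd_tokens = ["<img>", "<img_start>", "<img_end>"] \
    + [f"<img_{i}>" for i in range(32)]                        # 35 new tokens

hidden = 4096  # LLaMA-7B hidden size
# Rows saved for the new tokens in the stage-1 checkpoint:
stage1_new_rows = torch.randn(len(stage1_tokens), hidden)
# Rows the MLLMSD model expects for its new tokens:
mllmsd_new_rows = torch.empty(len(mllmsd_tokens), hidden)

try:
    # A copy along these lines is presumably what fails inside
    # load_pretrain_MLLM_alignment: 33 rows cannot fill 35 slots.
    mllmsd_new_rows.copy_(stage1_new_rows)
except RuntimeError as err:
    print(err)  # size-mismatch error
```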
Also, in the first training stage the LLaMA checkpoint is loaded, but in the MLLMSD training stage the LLaVA checkpoint is loaded, which is puzzling. Why not directly align LLaVA with CLIP?
Thank you for your excellent work, but the open-source code does have many minor issues, which makes others hesitant to follow your work. During the TrainStage1 phase, the issues are as follows:

1. The given launch command is

   ```
   torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/TrainStage1.py
   ```

   but the `fastchat` directory doesn't seem to exist; the script path should be `train/TrainStage1.py`.
2. `load_LLaVA_ckpt_v1_1` should be `load_LLaVA_ckpt_v1_1_7b` (a temporary workaround is sketched after this list).

Could you provide the TrainStage1 result checkpoint?
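For the second issue, until the rename is fixed in the repo, one temporary workaround is to alias the missing name before the training code uses it. This is only a sketch: I am assuming `load_LLaVA_ckpt_v1_1_7b` is importable from the module that defines it, so adjust the import path to match the actual code:

```python
# Hypothetical compatibility shim: expose the name the script calls
# (load_LLaVA_ckpt_v1_1) as an alias of the loader that actually exists
# (load_LLaVA_ckpt_v1_1_7b). The import path below is an assumption.
import train.TrainStage1 as stage1

if not hasattr(stage1, "load_LLaVA_ckpt_v1_1"):
    stage1.load_LLaVA_ckpt_v1_1 = stage1.load_LLaVA_ckpt_v1_1_7b
```

Renaming the call site in `train/TrainStage1.py` directly is of course simpler if you don't mind editing the file.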