TencentARC / SmartEdit

Official code of SmartEdit [CVPR-2024 Highlight]

there are lots of bugs in TrainStage1 #33

Open Bilibilee opened 4 days ago

Bilibilee commented 4 days ago

Thank you for your excellent work, but the open-source code does have quite a few minor issues, which makes others hesitant to follow your work. During the TrainStage1 phase, the issues are as follows:

  1. The launch command `torchrun --nproc_per_node=8 --master_port=20001 fastchat/train/TrainStage1.py` references a `fastchat` directory that doesn't seem to exist; the script path should be `train/TrainStage1.py` (see the corrected command after this list).
  2. `load_LLaVA_ckpt_v1_1` should be `load_LLaVA_ckpt_v1_1_7b`.
  3. The `SD_QFormer_conversation_33tokens` checkpoint doesn't contain an `mm_projector` module, though that module isn't used in training stage 1 anyway.
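
For reference, here is the corrected launch command from point 1, assuming the script lives at `train/TrainStage1.py` relative to the repository root:

```bash
# Corrected Stage-1 launch command (path assumption: train/TrainStage1.py)
torchrun --nproc_per_node=8 --master_port=20001 train/TrainStage1.py
```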

Could you provide the TrainStage1 result checkpoint?

yuzhou914 commented 3 days ago

Thanks for your interest in our work. There may be a few small typos from when we pushed the code to GitHub; you can simply fix them for further usage.

Bilibilee commented 1 day ago

Hello, I am confused about the inconsistencies between the first training stage and the MLLMSD training stage:

This discrepancy in the number of new tokens between the two stages causes the MLLMSD model's `load_pretrain_MLLM_alignment` function to fail when it loads the Stage-1 checkpoint.
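
For what it's worth, here is a minimal, hypothetical sketch of how such a mismatch could be worked around, assuming the failure comes from embedding-like tensors whose first (vocab) dimension differs between the Stage-1 checkpoint and the MLLMSD model; `load_stage1_ckpt` and everything inside it are my own illustration, not SmartEdit's actual code:

```python
# Hypothetical workaround sketch (not SmartEdit's actual code): load a Stage-1
# checkpoint into a model whose added-token count differs, copying only the
# rows the two embedding tables share and leaving the extra rows at their init.
import torch
import torch.nn as nn


def load_stage1_ckpt(model: nn.Module, ckpt_path: str) -> None:
    saved_state = torch.load(ckpt_path, map_location="cpu")
    model_state = model.state_dict()
    filtered = {}
    for name, saved in saved_state.items():
        if name not in model_state:
            continue  # key not present in this stage's model, skip it
        current = model_state[name]
        if saved.shape == current.shape:
            filtered[name] = saved  # shapes match exactly, take as-is
        elif (saved.dim() == current.dim() and saved.dim() >= 1
              and saved.shape[1:] == current.shape[1:]):
            # Only the first (vocab) dimension differs: keep the overlapping
            # token rows and leave the newly added rows at their initialization.
            merged = current.clone()
            n = min(saved.shape[0], current.shape[0])
            merged[:n] = saved[:n]
            filtered[name] = merged
        # Otherwise the shapes are incompatible and the tensor is skipped.
    missing, unexpected = model.load_state_dict(filtered, strict=False)
    print(f"loaded {len(filtered)} tensors; "
          f"{len(missing)} missing, {len(unexpected)} unexpected keys")
```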

In the first training stage, the LLaMA checkpoint is loaded, but in the MLLMSD training stage, the LLaVA checkpoint is loaded, which is puzzling. Why not directly align LLaVA with CLIP?