Coobiw / MPP-LLaVA

Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B MLLM with a LLaVA-style training pipeline on 24 GB RTX 3090/4090 GPUs.

How do I retrain with Qwen-14B? #11

delltower closed this issue 4 months ago

Coobiw commented 6 months ago

If your GPU memory is sufficient, just change the `llm_model` entry in the config file to the path of a 14B checkpoint. I will also soon release a 14B version trained with pipeline parallelism on two RTX 3090s.
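
For illustration, a minimal sketch of that config change, assuming a LAVIS-style YAML config loaded with OmegaConf; the file paths and the `model.llm_model` nesting are assumptions, only the `llm_model` key itself comes from the comment above:

```python
# Sketch: retarget the training config at a Qwen-14B checkpoint.
# Assumptions: a LAVIS-style YAML config and OmegaConf; the paths and
# the `model.llm_model` nesting are hypothetical. Only the `llm_model`
# key is taken from the maintainer's comment.
from omegaconf import OmegaConf

cfg = OmegaConf.load("train_config.yaml")         # hypothetical config path
cfg.model.llm_model = "/path/to/Qwen-14B-Chat"    # local 14B checkpoint directory
OmegaConf.save(cfg, "train_config_qwen14b.yaml")  # save a copy for the 14B run
```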

Coobiw commented 6 months ago

The two-GPU pipeline-parallel training version for Qwen-14B-Chat has been released: commit
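
As background on what that commit does conceptually: pipeline parallelism splits the model's layers into stages across GPUs, so each 24 GB card holds only part of the 14B model while micro-batches stream through the stages. The sketch below is a generic DeepSpeed `PipelineModule` example, not the repository's actual training code; the layer shapes and the config file name are illustrative.

```python
# Generic two-stage pipeline-parallel sketch with DeepSpeed (not the
# repository's actual code): the layer list is partitioned into
# num_stages=2 stages, one per GPU, and train_batch() streams
# micro-batches through the pipeline.
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule, LayerSpec

# Stand-in blocks; a real MLLM would list its embedding, decoder
# layers, and output head here in forward order.
layers = [LayerSpec(nn.Linear, 4096, 4096) for _ in range(8)]

model = PipelineModule(
    layers=layers,
    num_stages=2,                  # one stage per RTX 3090
    loss_fn=nn.CrossEntropyLoss(),
    partition_method="parameters", # balance stages by parameter count
)

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config="ds_config.json",  # hypothetical DeepSpeed config (micro-batch sizes, etc.)
)

# One call consumes gradient_accumulation_steps micro-batches from the iterator:
# loss = engine.train_batch(data_iter=iter(train_loader))
```

Such a script would be started with the DeepSpeed launcher, e.g. `deepspeed --num_gpus 2 train.py` (the script name is hypothetical), which spawns one process per GPU and assigns each its pipeline stage.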

Coobiw commented 4 months ago

Qwen-14B pipeline-parallel training is now supported, so I'll close this issue.