Coobiw / MPP-LLaVA

Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on an RTX 3090/4090 with 24GB.

Have you compared it with Qwen-VL? #6

Closed FoolishMao closed 5 months ago

FoolishMao commented 9 months ago

Have you compared it with Qwen-VL?

Coobiw commented 9 months ago

In terms of performance, they're not on the same level. Qwen-VL performs very well: its second stage is full-parameter training, and its training data is also considerably larger. With a bigger language model plus more data we might be able to compete somewhat, but I haven't tried that yet.
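The distinction being drawn (Qwen-VL's full-parameter second stage vs. the cheaper LLaVA-style recipe where the LLM is frozen and mainly the projector is trained) can be sketched in PyTorch. This is a minimal toy illustration, not the repo's actual code; `ToyMLLM` and `set_trainable` are hypothetical names, and the layer sizes are arbitrary.

```python
import torch.nn as nn

class ToyMLLM(nn.Module):
    """Toy stand-in for an MLLM: a vision projector plus a 'language model'.
    Names and shapes are illustrative only, not the repo's real modules."""
    def __init__(self):
        super().__init__()
        self.projector = nn.Linear(16, 32)  # vision-to-LLM projection layer
        self.llm = nn.Linear(32, 32)        # stand-in for the LLM backbone

def set_trainable(model: ToyMLLM, full_param: bool) -> int:
    """Toggle between full-parameter training (everything updates, as in
    Qwen-VL's second stage) and a frozen-LLM setup (only the projector
    updates, the budget-friendly option). Returns the trainable param count."""
    for p in model.llm.parameters():
        p.requires_grad = full_param  # freeze or unfreeze the LLM backbone
    for p in model.projector.parameters():
        p.requires_grad = True        # the projector is always trained
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

model = ToyMLLM()
frozen_count = set_trainable(model, full_param=False)  # projector only
full_count = set_trainable(model, full_param=True)     # everything
print(frozen_count, full_count)
```

With these toy shapes, only 544 of 1600 parameters train in the frozen setup; at real scale (a 14B LLM), that gap is what makes 24GB training feasible at all, at the cost of the performance ceiling the maintainer describes.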