Coobiw / MPP-LLaVA

Personal project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {SFT/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on a 24GB RTX 3090/4090.

Does this support the INT4-quantized version of Qwen-14B? #18

Closed: yumianhuli2 closed this issue 6 months ago

yumianhuli2 commented 6 months ago

Does this support the INT4-quantized version of Qwen-14B? Thanks!

Coobiw commented 6 months ago

It can be supported fairly easily. You can change the `llm_model` entry in the config file: https://github.com/Coobiw/MiniGPT4Qwen/blob/master/lavis/configs/models/minigpt4qwen/minigpt4qwen-14b.yaml#L34

Then download the Qwen-14B-Int4 weights and write their absolute path there.
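
A minimal sketch of that edit, assuming the weights were downloaded to `/path/to/Qwen-14B-Int4` (a placeholder path) and that the key sits under the config's `model:` section as in the linked file:

```yaml
# lavis/configs/models/minigpt4qwen/minigpt4qwen-14b.yaml
model:
  # Point the base LLM at the locally downloaded INT4 checkpoint.
  # "/path/to/Qwen-14B-Int4" is a placeholder; use your own absolute path.
  llm_model: "/path/to/Qwen-14B-Int4"
```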

For training, add an `llm_model` entry in lavis/projects/[your_dir]/[your_config].yaml and point it to your INT4 model path, as in the sketch below.
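
A sketch of that training-config override, under the same assumptions as above (the directory and file names are the placeholders from the instruction):

```yaml
# lavis/projects/[your_dir]/[your_config].yaml
model:
  # Override the default LLM with the local INT4 weights (placeholder path).
  llm_model: "/path/to/Qwen-14B-Int4"
```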

yumianhuli2 commented 6 months ago

OK, thanks!

Coobiw commented 6 months ago

Solved. I'll close this issue.