Coobiw / MPP-LLaVA

Personal Project: MPP-Qwen14B & MPP-Qwen-Next (Multimodal Pipeline Parallel based on Qwen-LM). Supports [video/image/multi-image] {sft/conversations}. Don't let poverty limit your imagination! Train your own 8B/14B LLaVA-style MLLM on RTX 3090/4090 24GB GPUs.

For the Qwen-7B weights, do I only need the LFS files, or are all the files required? #16

Closed cszhengyh closed 4 months ago

Coobiw commented 4 months ago

All the files. If your machine can access Hugging Face normally, you can use:

# Load model directly
from transformers import AutoModelForCausalLM

your_path = "/path/to/cache"  # replace with your desired cache directory
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    trust_remote_code=True,
    cache_dir=your_path,
)

This will automatically download everything to the cache_dir you specify.
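If you only want to fetch the repository files without instantiating the model, a sketch using `snapshot_download` from the `huggingface_hub` package can do the same thing: it downloads every file in the repo, i.e. both the small non-LFS files (config.json, tokenizer files, the remote code) and the large LFS weight shards. The cache path below is a placeholder, not a path from this thread:

```python
# Sketch: download the full Qwen/Qwen-7B-Chat repo (non-LFS configs + LFS weights).
# `snapshot_download` is from the huggingface_hub package; the cache_dir
# value is a placeholder you should replace with your own directory.
from huggingface_hub import snapshot_download

repo_path = snapshot_download(
    repo_id="Qwen/Qwen-7B-Chat",
    cache_dir="/path/to/cache",  # placeholder
)
print(repo_path)  # local directory now containing all repo files
```

You can then point `from_pretrained` at the returned local directory instead of the Hub ID.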

Reference: (screenshot omitted)
Coobiw commented 4 months ago

This has been solved.