Use PEFT or full-parameter training to fine-tune 350+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
Is setting MiniCPM3-4B in the inference code not supported? #2310
Open
learn01one opened 4 hours ago
Inference code setup: `model_type = ModelType.MiniCPM3-4B`, then `template_type = get_default_template_type(model_type)`. This seems to raise an error. Is MiniCPM3-4B currently unsupported? Thanks!
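One likely cause of the error, independent of whether the model is supported: `MiniCPM3-4B` contains a hyphen, which cannot appear in a Python attribute name, so `ModelType.MiniCPM3-4B` is rejected by the parser before swift is ever consulted. Model-type constants in swift are snake_case identifiers; the exact attribute name below (`minicpm3_4b`) is an assumption, so check the `ModelType` class in your installed swift version. A minimal, self-contained sketch (using a stand-in class rather than the real `from swift.llm import ModelType` import):

```python
import ast

# Hypothetical stand-in for swift's ModelType registry, only to illustrate
# the naming convention; the real attribute name may differ.
class ModelType:
    minicpm3_4b = "minicpm3-4b"  # attributes must be valid snake_case identifiers

# The hyphenated form from the issue is a syntax error: Python tokenizes
# `MiniCPM3-4B` as `MiniCPM3 - 4B`, and `4B` is not a valid literal.
try:
    ast.parse("model_type = ModelType.MiniCPM3-4B")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)

# The snake_case attribute works as a normal lookup.
model_type = ModelType.minicpm3_4b
print(model_type)
```

If the snake_case attribute does not exist either, then the model is genuinely not registered in that swift release, and the fix is upgrading swift rather than renaming the attribute.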