Use PEFT or full-parameter training to fine-tune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
internvl-chat-v1_5 DPO throws an error #1773
Closed
teamwong111 closed 2 months ago
Describe the bug: what the bug is and how to reproduce it, preferably with screenshots.
DPO with internvl-chat-v1_5 throws an error; it can be reproduced with the official example.
The problem is at line 4452 of swift/llm/utils/model.py.
internvl-chat-v1_5's config.json does not configure AutoModelForCausalLM, so the error is raised.
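For context, a minimal sketch of the failure mode described above, assuming a local checkpoint at a hypothetical path (`path/to/internvl-chat-v1_5`). This is only an illustration of how a missing `AutoModelForCausalLM` entry in the config's `auto_map` breaks loading, not Swift's actual loading code at model.py line 4452:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Hypothetical local path to the internvl-chat-v1_5 checkpoint.
model_dir = "path/to/internvl-chat-v1_5"

# Inspect which auto classes the checkpoint's config.json maps to custom code.
config = AutoConfig.from_pretrained(model_dir, trust_remote_code=True)
print(getattr(config, "auto_map", None))  # expected: no "AutoModelForCausalLM" entry

# Without an AutoModelForCausalLM mapping, this routing fails with a
# "Unrecognized configuration class ... for this kind of AutoModel"-style error,
# which matches the DPO failure reported above.
model = AutoModelForCausalLM.from_pretrained(model_dir, trust_remote_code=True)
```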