Use PEFT or Full-parameter to finetune 400+ LLMs or 100+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
longlora finetuning llama3.1-8b-instruct errors on positional embeddings #2431
Open
xtchen96 opened 2 days ago
Describe the bug
Fine-tuning llama-3.1-8b-instruct on 4x A100 GPUs via the CLI (also tried llama2-13b-ms, same error).
The error is as follows:
Your hardware and system info
Ubuntu 22.04, torch 2.5.1, CUDA 12.4
Additional context