hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Ascend multi-card training issue #3810

Open 1737686924 opened 3 months ago

1737686924 commented 3 months ago

Reminder

Reproduction

The script is as follows:

```bash
deepspeed --num_gpus 4 src/train_bash.py \
    --deepspeed examples/deepspeed/ds_z3_config.json \
    --stage sft \
    --do_train \
    --model_name_or_path /data/applications/LMD-BF/backend/BaseModels/internlm2-chat-20b-sft/internlm2-chat-20b-sft/ \
    --dataset identity \
    --template intern2 \
    --finetuning_type lora \
    --lora_target wqkv \
    --output_dir saves/internlm2-chat-20b-sft/lora/sft \
    --overwrite_cache true \
    --overwrite_output_dir true \
    --cutoff_len 1024 \
    --preprocessing_num_workers 16 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --eval_steps 2 \
    --evaluation_strategy steps \
    --load_best_model_at_end \
    --learning_rate 5e-5 \
    --num_train_epochs 10.0 \
    --val_size 0.1 \
    --ddp_timeout 180000000 \
    --plot_loss \
    --fp16
```

Every ZeRO stage (0, 1, 2, 3) errors out, apparently from running out of device memory: (screenshot attached)
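For reference, lowering the per-device batch size and sequence length is usually the first lever when every ZeRO stage runs out of memory. A minimal variation of the command above, keeping the original paths and flags; the specific values are illustrative:

```bash
# Same launch with a smaller memory footprint: batch size 1, shorter
# cutoff, higher gradient accumulation to preserve the effective batch
# size; adjust the numbers to taste.
deepspeed --num_gpus 4 src/train_bash.py \
    --deepspeed examples/deepspeed/ds_z3_config.json \
    --stage sft \
    --do_train \
    --model_name_or_path /data/applications/LMD-BF/backend/BaseModels/internlm2-chat-20b-sft/internlm2-chat-20b-sft/ \
    --dataset identity \
    --template intern2 \
    --finetuning_type lora \
    --lora_target wqkv \
    --output_dir saves/internlm2-chat-20b-sft/lora/sft \
    --cutoff_len 512 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --fp16
```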

With offload enabled, training gets stuck at this step: (screenshot attached)

How should this be resolved?
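One common cause of a hang right after enabling offload is that the slow ZeRO-3 initialization exceeds a collective-communication timeout. A sketch of raising the Ascend HCCL timeouts before launch; the variable names are taken from Huawei's CANN documentation and the values are illustrative, so treat this as an assumption to verify against your CANN version:

```bash
# Give the collectives more time to come up before the watchdog fires;
# both values are in seconds and purely illustrative.
export HCCL_CONNECT_TIMEOUT=1800
export HCCL_EXEC_TIMEOUT=1800
```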

Expected behavior

No response

System Info

No response

Others

No response

297106271 commented 1 month ago

Hello, on Ascend, why does my multi-card inference not run on the NPU? Is there some configuration required?
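A common cause is that the torch_npu plugin is missing or the cards are not exposed to the process. A minimal sanity check, assuming Huawei's torch_npu plugin is the intended backend:

```bash
# Expose the cards to the process, then confirm PyTorch can see the NPUs;
# torch_npu patches torch with the npu device module.
export ASCEND_RT_VISIBLE_DEVICES=0,1,2,3
python -c "import torch; import torch_npu; print(torch.npu.is_available(), torch.npu.device_count())"
```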

mozhu1314 commented 1 week ago

I have the same problem: on 910B2, LoRA fine-tuning a 57B model with eight cards fails to allocate memory. After scaling up to 16 cards across two machines, it still errors out, and the memory used per card does not go down. Why?
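With LoRA the frozen base weights dominate memory, and ZeRO stages 0-2 replicate all parameters on every card, so adding cards does not shrink per-card usage; only ZeRO stage 3 partitions the parameters themselves. A quick way to confirm what is actually running, assuming the config path from this thread and the standard Ascend tooling:

```bash
# Check that the launched DeepSpeed config really uses stage 3
# (parameter partitioning), then watch per-card memory while training;
# npu-smi is Ascend's counterpart to nvidia-smi.
grep '"stage"' examples/deepspeed/ds_z3_config.json
watch -n 5 npu-smi info
```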

alf-wangzhi commented 1 week ago

How should multi-node training on Ascend be configured? I get this error: (screenshot attached)
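For multi-node runs the single-node command from this thread needs a distributed launcher. A hypothetical two-node sketch using torchrun; MASTER_IP, the port, and the trailing training arguments are placeholders to fill in:

```bash
# Run the matching command on each node; only --node_rank differs.
# Node 0:
torchrun --nnodes 2 --node_rank 0 --nproc_per_node 8 \
    --master_addr $MASTER_IP --master_port 29500 \
    src/train_bash.py --deepspeed examples/deepspeed/ds_z3_config.json ...
# Node 1:
torchrun --nnodes 2 --node_rank 1 --nproc_per_node 8 \
    --master_addr $MASTER_IP --master_port 29500 \
    src/train_bash.py --deepspeed examples/deepspeed/ds_z3_config.json ...
```

Both nodes must be able to reach $MASTER_IP on the chosen port, and the dataset and model paths must exist on every machine.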