modelscope / ms-swift

Use PEFT or Full-parameter to finetune 350+ LLMs or 90+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, InternVL2, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, Yi-VL, DeepSeek-VL, Phi3.5-Vision, ...)
https://swift.readthedocs.io/zh-cn/latest/Instruction/index.html
Apache License 2.0

SFT training errors out at a specific step #591

Closed: WSC741606 closed this issue 6 months ago

WSC741606 commented 6 months ago

A question for the maintainers: I'm fine-tuning yi-6b-chat on a custom dataset mixed with tulu-v2-sft-mixture, and training always errors out at the same specific step (743/3030). It fails at that step on every retry (I've switched machines, but they are all V100 setups). What could be causing this? And is there a way to extract the training rows that correspond to that step, or to simply skip them? I've run the test_oom_error check and confirmed this is not an OOM problem. The dataset has already been filtered with the regular expression below to remove special characters, keeping only Chinese, English, digits, Greek letters, and punctuation:

InvaildSymbolPattern=r'[^a-zA-Z0-9\u4e00-\u9fa5\u0370-\u03FF\s\.,;:³?!@ø•Å±≠/\\²#$≈·×≡`°~℃%^&*()_+\-=\[\]{}<>≤≥|,。?!;、:“‘’→←↑↓↔√《》()【】"\']'
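For reference, a minimal sketch of how this filter might be applied to the CSV data described below; the pandas-based preprocessing, the file name, and the "query"/"response" column names are assumptions for illustration, not taken from the actual pipeline:

import re
import pandas as pd

# Whitelist filter from above: keep Chinese, English, digits, Greek letters, and the listed punctuation
InvaildSymbolPattern = r'[^a-zA-Z0-9\u4e00-\u9fa5\u0370-\u03FF\s\.,;:³?!@ø•Å±≠/\\²#$≈·×≡`°~℃%^&*()_+\-=\[\]{}<>≤≥|,。?!;、:“‘’→←↑↓↔√《》()【】"\']'

def strip_invalid_symbols(text):
    # Drop every character that is not in the whitelist
    return re.sub(InvaildSymbolPattern, '', str(text))

# Hypothetical file and column names; the post only states a "query,response" CSV in utf-8-sig encoding
df = pd.read_csv('custom_dataset.csv', encoding='utf-8-sig')
df['query'] = df['query'].map(strip_invalid_symbols)
df['response'] = df['response'].map(strip_invalid_symbols)
df.to_csv('custom_dataset_filtered.csv', index=False, encoding='utf-8-sig')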

The dataset is a CSV in "query,response" format with utf-8-sig encoding. Other custom datasets prepared with the same pipeline have trained fine before, so I haven't hit anything like this. The error traceback is below, though it doesn't seem to reveal much:

{'loss': 0.86012192, 'acc': 0.77332544, 'grad_norm': 0.27813789, 'learning_rate': 8.43e-06, 'epoch': 0.73, 'global_step': 735}
{'loss': 0.88135662, 'acc': 0.76727462, 'grad_norm': 0.42908424, 'learning_rate': 8.41e-06, 'epoch': 0.73, 'global_step': 740}
Train:  25%|██▍       | 743/3030 [8:56:35<28:32:10, 44.92s/it]Traceback (most recent call last):
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/cli/sft.py", line 5, in <module>
    sft_main()
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/utils/run_utils.py", line 31, in x_main
    result = llm_x(args, **kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/llm/sft.py", line 229, in llm_sft
    trainer.train(training_args.resume_from_checkpoint)
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/trainers/trainers.py", line 50, in train
    super().train(*args, **kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 1624, in train
    return inner_training_loop(
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 1961, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 2911, in training_step
    self.accelerator.backward(loss)
  File "/data/home/user/Test/lib/python3.9/site-packages/accelerate/accelerator.py", line 1999, in backward
    self.scaler.scale(loss).backward(**kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [IP2]:31440
Traceback (most recent call last):
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/cli/sft.py", line 5, in <module>
    sft_main()
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/utils/run_utils.py", line 31, in x_main
    result = llm_x(args, **kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/llm/sft.py", line 229, in llm_sft
    trainer.train(training_args.resume_from_checkpoint)
  File "/data/home/user/Test/lib/python3.9/site-packages/swift/trainers/trainers.py", line 50, in train
    super().train(*args, **kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 1624, in train
    return inner_training_loop(
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 1961, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
  File "/data/home/user/Test/lib/python3.9/site-packages/transformers/trainer.py", line 2911, in training_step
    self.accelerator.backward(loss)
  File "/data/home/user/Test/lib/python3.9/site-packages/accelerate/accelerator.py", line 1999, in backward
    self.scaler.scale(loss).backward(**kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward
    torch.autograd.backward(
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: [../third_party/gloo/gloo/transport/tcp/pair.cc:534] Connection closed by peer [IP0]:26763
[2024-03-23 10:11:58,297] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82642 closing signal SIGTERM
[2024-03-23 10:11:58,299] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82643 closing signal SIGTERM
[2024-03-23 10:11:58,299] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82644 closing signal SIGTERM
[2024-03-23 10:11:58,299] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82645 closing signal SIGTERM
[2024-03-23 10:11:58,300] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82646 closing signal SIGTERM
[2024-03-23 10:11:58,300] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82647 closing signal SIGTERM
[2024-03-23 10:11:58,300] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 82648 closing signal SIGTERM
[2024-03-23 10:12:02,452] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 7 (pid: 82649) of binary: /data/home/user/Test/bin/python3
Traceback (most recent call last):
  File "/data/home/user/Test/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/distributed/run.py", line 806, in main
    run(args)
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/data/home/user/Test/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/data/home/user/Test/lib/python3.9/site-packages/swift/cli/sft.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-03-23_10:11:58
  host      : IP0
  rank      : 7 (local_rank: 7)
  exitcode  : 1 (pid: 82649)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================

Thanks a lot!

WSC741606 commented 6 months ago

The training script is as follows:

NNODES=3 \
NODE_RANK=$NODE_RANK \
MASTER_ADDR=$MASTER_ADDR \
MASTER_PORT=$MASTER_PORT \
NPROC_PER_NODE=8 \
swift sft \
 --seed 0 \
 --ddp_backend "gloo" \
 --use_flash_attn "False" \
 --sft_type "lora" \
 --dtype "fp16" \
 --neftune_noise_alpha "5" \
 --model_id_or_path $MODEL \
 --model_type yi-6b-chat \
 --template_type AUTO \
 --system $SYSTEM \
 --dataset tulu-v2-sft-mixture \
 --custom_train_dataset_path $DATASET \
 --train_dataset_sample -1 \
 --eval_steps "500" \
 --save_steps "500" \
 --check_dataset_strategy 'warning' \
 --lora_target_modules ALL \
 --lora_rank "32" \
 --lora_dtype AUTO \
 --batch_size "1" \
 --learning_rate "1e-5" \
 --max_length "8192" \
 --num_train_epochs "3" \
 --self_cognition_sample "9999" \
 --model_name $NAME \
 --model_author $AUTHOR \
 --warmup_ratio "0.1" \
 --gradient_accumulation_steps 16 \
 --preprocess_num_proc 16 \
 --test_oom_error False \
 --add_output_dir_suffix False \
 --save_only_model "False" \
 --output_dir $new_folder --logging_dir $new_folder/runs > $new_folder/runs/run.log 2>&1
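(Regarding the question above about locating the training rows for the failing step: a rough back-of-the-envelope sketch based on the settings in this script. It assumes standard Hugging Face Trainer behavior, where one optimizer step consumes batch_size × gradient_accumulation_steps × world_size samples, and it ignores the shuffling the trainer applies with the given seed, so it only bounds positions in the shuffled sample order, not row numbers in the original CSV.)

# Assumed values taken from the script above: 3 nodes x 8 GPUs, batch_size 1, gradient accumulation 16
world_size = 3 * 8
batch_size = 1
grad_accum = 16
samples_per_step = batch_size * grad_accum * world_size  # 384 samples per optimizer step

failing_step = 743
start = (failing_step - 1) * samples_per_step
end = failing_step * samples_per_step
print(f'global_step {failing_step} roughly covers shuffled sample indices [{start}, {end})')
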
WSC741606 commented 6 months ago

Well, that settles it: training only on the custom dataset for an epoch works fine, so tulu-v2-sft-mixture is probably the culprit.

Jintao-Huang commented 6 months ago

You could try mixing in the ms-bench dataset.

Jintao-Huang commented 6 months ago

You don't need that many self-cognition samples.

WSC741606 commented 6 months ago

OK, I'll give it a try. Also, for the self-cognition sampling, should I use a fixed value, or roughly match the proportion of the other data? I've expanded the self-cognition data to around 800 entries in total.

WSC741606 commented 6 months ago

A follow-up: training with ms-bench mixed in works fine.