hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

PPO stage and RM stage hit the same error when training with accelerate #21

Closed WangRongsheng closed 1 year ago

WangRongsheng commented 1 year ago

Below is the error log from the PPO stage. For the RM stage, the same error went away once I stopped using accelerate for multi-GPU training:

  1. "transformers_version": "4.29.2"
  2. Training error:
    
    [INFO|modeling_utils.py:2513] 2023-06-08 10:26:34,951 >> loading weights file llama-hf/33b-hf/llama-33b-hf/pytorch_model.bin.index.json
    [INFO|modeling_utils.py:1154] 2023-06-08 10:26:34,952 >> Instantiating LlamaForCausalLM model under default dtype torch.float16.
    [INFO|configuration_utils.py:577] 2023-06-08 10:26:34,953 >> Generate config GenerationConfig {
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "pad_token_id": 0,
    "transformers_version": "4.29.2"
    }

    Loading checkpoint shards: 71%|█████████████████████████████████████████▍ | 5/7 [01:46<00:43, 21.78s/it]
    WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517201 closing signal SIGTERM
    WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517202 closing signal SIGTERM
    WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517204 closing signal SIGTERM
    ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 2 (pid: 517203) of binary: /root/miniconda3/envs/xray/bin/python
    Traceback (most recent call last):
      File "/root/miniconda3/envs/xray/bin/accelerate", line 8, in <module>
        sys.exit(main())
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
        args.func(args)
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/launch.py", line 909, in launch_command
        multi_gpu_launcher(args)
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/launch.py", line 604, in multi_gpu_launcher
        distrib_run.run(args)
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
        elastic_launch(
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
        return launch_agent(self._config, self._entrypoint, list(args))
      File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
        raise ChildFailedError(
    torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

    src/train_ppo.py FAILED

    Failures:

    ------------------------------------------------------------
    Root Cause (first observed failure):
    [0]:
      time      : 2023-06-08_10:28:37
      host      : mpudgx202302-DGX-Station-A100-920-23487-2531-000
      rank      : 2 (local_rank: 2)
      exitcode  : -9 (pid: 517203)
      error_file:
      traceback : Signal 9 (SIGKILL) received by PID 517203
    ============================================================

  3. My system memory:

    ```
                   total        used        free      shared  buff/cache   available
    Mem:           503Gi       301Gi       198Gi        32Mi       3.2Gi       199Gi
    Swap:             0B          0B          0B
    ```

  4. Training command:

    ```bash
    accelerate launch src/train_ppo.py \
        --model_name_or_path llama-hf/33b-hf/llama-33b-hf \
        --do_train \
        --dataset CCT \
        --finetuning_type lora \
        --checkpoint_dir sft/ \
        --reward_model rm/ \
        --output_dir ppo \
        --per_device_train_batch_size 4 \
        --gradient_accumulation_steps 4 \
        --lr_scheduler_type cosine \
        --logging_steps 10 \
        --save_steps 1000 \
        --learning_rate 1e-5 \
        --num_train_epochs 2.0 \
        --resume_lora_training False \
        --plot_loss
    ```
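For context on the failure mode: `exitcode: -9` means the worker received SIGKILL, which on Linux usually comes from the kernel OOM killer rather than from Python itself. With a multi-GPU accelerate launch, each process reads its own full fp16 copy of the 33B checkpoint into host RAM while the shards are loaded, so peak usage can exceed what `free -h` reports as available. A rough back-of-the-envelope sketch (the process count of 4 is an assumption inferred from the four PIDs in the log above, not taken from any config):

```python
# Back-of-the-envelope host-RAM estimate for loading LLaMA-33B in fp16
# across several accelerate processes. The process count is an assumption
# inferred from the four PIDs in the log above, not read from any config.

params = 33e9           # ~33B parameters
bytes_per_param = 2     # torch.float16
num_processes = 4       # assumed: one full model copy per launched rank

per_process_gib = params * bytes_per_param / 1024**3
total_gib = per_process_gib * num_processes

print(f"per process: ~{per_process_gib:.0f} GiB")  # ~61 GiB
print(f"all ranks:   ~{total_gib:.0f} GiB")        # ~246 GiB
```

Roughly 246 GiB of weights against the ~199 GiB reported as available above could already be enough to trigger the OOM killer, before the reward model or any optimizer state is counted.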
gebilaoman commented 1 year ago

I'm hitting this error too. How did you solve it?

hiyouga commented 1 year ago

@gebilaoman Is your CPU memory sufficient to load the model?
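If it is not, one common mitigation at the transformers level is to pass `low_cpu_mem_usage=True` to `from_pretrained`, which materializes the sharded checkpoint one shard at a time instead of allocating a full extra copy of the model up front. Whether the training script in this thread exposes such an option is not shown here, so the snippet below is only a standalone sketch of the transformers call, not the project's own loading code:

```python
import torch
from transformers import LlamaForCausalLM

# Standalone sketch (not the project's loading code): low_cpu_mem_usage=True
# keeps peak host RAM closer to one shard's worth of weights while the
# multi-shard checkpoint is read, instead of a full extra copy of the model.
model = LlamaForCausalLM.from_pretrained(
    "llama-hf/33b-hf/llama-33b-hf",  # path reused from the command above
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
```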