Below is the error log from the PPO stage. When the RM stage produced this same error, it could be resolved by not using accelerate for multi-GPU training (see the note after the log):
Loading checkpoint shards: 71%|█████████████████████████████████████████▍ | 5/7 [01:46<00:43, 21.78s/it]WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517201 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517202 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 517204 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 2 (pid: 517203) of binary: /root/miniconda3/envs/xray/bin/python
Traceback (most recent call last):
  File "/root/miniconda3/envs/xray/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/launch.py", line 909, in launch_command
    multi_gpu_launcher(args)
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/accelerate/commands/launch.py", line 604, in multi_gpu_launcher
    distrib_run.run(args)
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/miniconda3/envs/xray/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
src/train_ppo.py FAILED
Failures:
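Exit code -9 means the worker was killed with SIGKILL, which during "Loading checkpoint shards" typically points to the kernel's OOM killer: under an accelerate multi-GPU launch, every rank loads the sharded checkpoint into host memory at the same time. Besides launching without accelerate as mentioned above, the following is a minimal sketch of one common mitigation that reduces per-rank host memory while loading; the model path and dtype are assumptions, not taken from the original scripts.

```python
# Hypothetical sketch, not the repository's actual loading code: reduce host-RAM
# pressure during "Loading checkpoint shards" by streaming shards in half precision.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model",        # placeholder path, not from the original log
    torch_dtype=torch.float16,   # half-precision weights roughly halve RAM per shard
    low_cpu_mem_usage=True,      # load shard by shard instead of materializing a full state dict
)
```

If multi-GPU PPO is still required, another option is to keep the accelerate launch but lower the number of processes so that fewer ranks load checkpoint shards concurrently.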