hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Multi-node multi-GPU SFT fine-tuning of GLM-4: training runs normally, but model evaluation fails with torch.distributed.elastic.multiprocessing.errors.ChildFailedError: /home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/llamafactory/launcher.py FAILED #5255

Closed AnnaYanami-8 closed 2 months ago

AnnaYanami-8 commented 2 months ago


{'train_runtime': 23.5066, 'train_samples_per_second': 10.338, 'train_steps_per_second': 1.659, 'train_loss': 1.0768448389493501, 'epoch': 2.89}
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:23<00:00, 1.66it/s]
[INFO|trainer.py:3410] 2024-08-23 12:22:35,576 >> Saving model checkpoint to saves/glm4_9b/lora/sft
W0823 12:22:35.615000 140348747958080 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 273712 closing signal SIGTERM
E0823 12:22:36.795000 140348747958080 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: -9) local_rank: 1 (pid: 273713) of binary: /home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/bin/python
Traceback (most recent call last):
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/bin/torchrun", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

/home/cyber-sec-011/anaconda3/envs/llama_factory_cs2/lib/python3.11/site-packages/llamafactory/launcher.py FAILED

Failures:

------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-08-23_12:22:35
  host      : cybersec011-System-Product-Name
  rank      : 1 (local_rank: 1)
  exitcode  : -9 (pid: 273713)
  error_file:
  traceback : Signal 9 (SIGKILL) received by PID 273713
============================================================

Setup: the master node has two RTX 3090s, the worker node an RTX 4090. Training commands:
3090 (master): FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=10.188.240.21 MASTER_PORT=20953 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
4090 (worker): FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=10.188.240.21 MASTER_PORT=20953 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
System: Ubuntu 22.04, CUDA 12.4.
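For readers unfamiliar with the multi-node setup, here are the same two invocations with each environment variable annotated. The commands are taken verbatim from the report above; the comments are my reading of the standard torchrun/LLaMA-Factory launch variables, not part of the original report.

```sh
# Master node (dual RTX 3090), node rank 0.
#   FORCE_TORCHRUN=1  -> make llamafactory-cli launch through torchrun
#   NNODES=2          -> total number of machines in the job
#   RANK=0            -> this machine's node rank
#   MASTER_ADDR/PORT  -> address and rendezvous port of the rank-0 node
FORCE_TORCHRUN=1 NNODES=2 RANK=0 \
  MASTER_ADDR=10.188.240.21 MASTER_PORT=20953 \
  llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml

# Worker node (RTX 4090): identical invocation except RANK=1.
FORCE_TORCHRUN=1 NNODES=2 RANK=1 \
  MASTER_ADDR=10.188.240.21 MASTER_PORT=20953 \
  llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml
```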
AnnaYanami-8 commented 2 months ago

Intermittent!!! Intermittent!!! Intermittent!!! Across repeated runs the failure is intermittent: sometimes the model is saved normally, and the saved model can be used for inference and merging without problems. It just cannot be evaluated within the pipeline above.

Later, for the multi-node multi-GPU run, I changed the deepspeed parameter in examples/train_lora/llama3_lora_sft_ds3.yaml from the original examples/deepspeed/ds_z2_config.json to examples/deepspeed/ds_z3_config.json. With ds_z2 (data parallelism) evaluation fails, while with ds_z3 (model parallelism) both training and evaluation run normally.
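For anyone reproducing the workaround, the change amounts to pointing the deepspeed entry of the training YAML at the ZeRO-3 config instead of ZeRO-2. A minimal sketch, assuming the stock example paths shipped with LLaMA-Factory (adjust the paths if your checkout differs):

```sh
# Swap the DeepSpeed config referenced by the example SFT recipe from ZeRO-2 to ZeRO-3.
# Edits the YAML in place and keeps a .bak backup.
sed -i.bak \
  's|examples/deepspeed/ds_z2_config.json|examples/deepspeed/ds_z3_config.json|' \
  examples/train_lora/llama3_lora_sft_ds3.yaml

# Verify the result; the line should now read:
#   deepspeed: examples/deepspeed/ds_z3_config.json
grep 'deepspeed' examples/train_lora/llama3_lora_sft_ds3.yaml
```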

AnnaYanami-8 commented 2 months ago

With ds_z2 (data parallelism) evaluation fails; with ds_z3 (model parallelism) training and evaluation both run normally. Could this be a bug in the evaluation code? Any answer would be appreciated.