FlagAI-Open / Aquila2

The official repository of the Aquila2 series proposed by BAAI, including pretrained and chat large language models.

QLoRA SFT fails for the 7B model with transformers version 4.35.0 (error message below) #125

Closed tfal-yan closed 11 months ago

tfal-yan commented 11 months ago

```
Loading checkpoint shards: 100%|██████████| 3/3 [00:35<00:00, 11.85s/it]
Traceback (most recent call last):
  File "/data0/testCase/Aquila2/finetune/finetune.py", line 481, in <module>
    train()
  File "/data0/testCase/Aquila2/finetune/finetune.py", line 399, in train
    model = prepare_model_for_kbit_training(
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/peft/utils/other.py", line 130, in prepare_model_for_kbit_training
    model.gradient_checkpointing_enable(**gc_enable_kwargs)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/transformers-4.35.0-py3.11.egg/transformers/modeling_utils.py", line 1872, in gradient_checkpointing_enable
    self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=gradient_checkpointing_func)
TypeError: AquilaPreTrainedModel._set_gradient_checkpointing() got an unexpected keyword argument 'enable'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2585097) of binary: /root/anaconda3/envs/testCase/bin/python
Traceback (most recent call last):
  File "/root/anaconda3/envs/testCase/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```

ftgreat commented 11 months ago

We suggest rolling transformers back to 4.31.0, which is the version Aquila2 supports: https://github.com/FlagAI-Open/FlagAI/issues/556

Since newer transformers releases break compatibility, you can alternatively adjust the model code yourself by removing or adapting the function that no longer matches.
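For reference, below is a minimal, hypothetical sketch of that second option. Only `AquilaPreTrainedModel` and the `enable` / `gradient_checkpointing_func` keywords come from the traceback above; the file name (`modeling_aquila.py`) and the method body are assumptions rather than the official fix, and pinning `transformers==4.31.0` remains the supported path.

```python
# Hypothetical patch for the custom Aquila modeling code (assumed to live in the
# checkpoint's modeling_aquila.py). Remote-code models of this era usually define
#     def _set_gradient_checkpointing(self, module, value=False): ...
# while transformers >= 4.35 instead calls
#     self._set_gradient_checkpointing(enable=True, gradient_checkpointing_func=...)
# The simplest adjustment is to delete the outdated override so the base
# PreTrainedModel implementation is used; the replacement below shows the new contract.
import torch.utils.checkpoint
from transformers import PreTrainedModel


class AquilaPreTrainedModel(PreTrainedModel):
    # ... config_class, base_model_prefix, etc. stay as in the original file ...

    def _set_gradient_checkpointing(
        self,
        enable: bool = True,
        gradient_checkpointing_func=torch.utils.checkpoint.checkpoint,
    ):
        # Mirror the >= 4.35 behaviour: flag every sub-module that supports
        # checkpointing and record which checkpoint function it should call.
        for module in self.modules():
            if hasattr(module, "gradient_checkpointing"):
                module._gradient_checkpointing_func = gradient_checkpointing_func
                module.gradient_checkpointing = enable
```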

tfal-yan commented 11 months ago

```
[2023-11-30 10:55:28,139] [INFO] [logger.py:85:log_dist] [Rank 0] Unsupported bmtrain
Loading checkpoint shards: 100%|██████████| 3/3 [00:23<00:00, 7.82s/it]
trainable params: 4,194,304 || all params: 7,299,731,456 || trainable%: 0.05745833288911608
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 2593016) of binary: /root/anaconda3/envs/testCase/bin/python
Traceback (most recent call last):
  File "/root/anaconda3/envs/testCase/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```

/data0/testCase/Aquila2/finetune/finetune.py FAILED

Failures:

```
---------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time       : 2023-11-30_10:55:58
  host       :
  rank       : 0 (local_rank: 0)
  exitcode   : -11 (pid: 2593016)
  error_file :
  traceback  : Signal 11 (SIGSEGV) received by PID 2593016
=========================================================
```

bash script (everything else is left at the defaults):

```bash
for ip in `cat ${HOSTFILE} | cut -d " " -f1`
do
  echo "Starting node ${i}/${NNodes}: ${ip}"
  ssh $ip eval \
    "export CUDA_VISIBLE_DEVICES="4,3,2,1,0" && \
    source /root/anaconda3/etc/profile.d/conda.sh && \
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/root/anaconda3/lib/ && \
    export CUDA_LAUNCH_BLOCKING=1 && \
    conda init bash && \
    conda activate testCase && \
    cd ${PWD} && \
    export PYTHONPATH=${PYTHONPATH}:. && \
    export WANDB_MODE=offline && \
    torchrun \
      --nnodes=${NNodes} \
      --node_rank=${i} \
      --nproc_per_node=1 \
      --master_addr=${MASTER_ADDR} \
      --master_port=20001 \
      $AQUILA2_HOME/finetune/finetune.py \
      --model_dir $CKPT_INPUT \
      --model_name $MODEL_NAME_INPUT \
      --data_path $DATA_FILE \
      --use_lora True \
      --q_lora True \
      --lora_r 8 \
      --lora_alpha 16 \
      --lora_dropout 0.05 \
      --convo_template $CONVO_TEMPLATE \
      --fp16 \
      --model_max_length 2048 \
      --output_dir $CKPT_OUTPUT/$MODEL_NAME_OUTPUT \
      --num_train_epochs $EPOCHS \
      --per_device_train_batch_size 4 \
      --per_device_eval_batch_size 1 \
      --gradient_accumulation_steps 1 \
      --evaluation_strategy no \
      --eval_steps 1500 \
      --save_strategy 'epoch' \
      --save_steps 2000 \
```

hostfile:

```
127.0.0.1 slots=5
```

Launched with:

```
bash finetune/7B/finetune_qlora.sh
```
ftgreat commented 11 months ago

Could you change --nproc_per_node=1 to the number of GPUs?

tfal-yan commented 11 months ago

I changed it to --nproc_per_node=2 and limited CUDA_VISIBLE_DEVICES to two GPUs (4 and 3), but it still fails:

WARNING:torch.distributed.run:
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.


```
[2023-11-30 11:38:55,548] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-30 11:38:55,591] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-11-30 11:38:58,382] [INFO] [comm.py:637:init_distributed] cdb=None
[2023-11-30 11:38:58,382] [INFO] [comm.py:668:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2023-11-30 11:38:58,431] [INFO] [comm.py:637:init_distributed] cdb=None
[2023-11-30 11:38:58,442] [INFO] [logger.py:85:log_dist] [Rank 0] Unsupported bmtrain
Loading checkpoint shards: 100%|██████████| 3/3 [00:36<00:00, 12.48s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:36<00:00, 12.18s/it]
trainable params: 4,194,304 || all params: 7,299,731,456 || trainable%: 0.05745833288911608
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 2595541) of binary: /root/anaconda3/envs/testCase/bin/python
Traceback (most recent call last):
  File "/root/anaconda3/envs/testCase/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
```

/data0/testCase/Aquila2/finetune/finetune.py FAILED

Failures:
[1]:
  time : 2023-11-30_11:39:43

ftgreat commented 11 months ago

If you only have a single machine, you can first try this script: https://github.com/FlagAI-Open/Aquila2/blob/main/finetune/7B/finetune_qlora_single_node.sh (see also https://github.com/FlagAI-Open/Aquila2/issues/124).

This does not look related to the PyTorch version.

ftgreat commented 11 months ago

export CUDA_VISIBLE_DEVICES="4,3,2,1,0"; bash finetune/7B/finetune_qlora_single_node.sh

tfal-yan commented 11 months ago

```
[2023-11-30 14:14:03,335] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/data0/testCase/Aquila2/finetune/finetune.py", line 481, in <module>
    train()
  File "/data0/testCase/Aquila2/finetune/finetune.py", line 350, in train
    ) = parser.parse_args_into_dataclasses()
  File "/root/anaconda3/envs/testCase/lib/python3.11/site-packages/transformers/hf_argparser.py", line 347, in parse_args_into_dataclasses
    raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['--use_single_node', 'True']
```

Does the following file also need to be updated to match: Aquila2/finetune/finetune.py?

ftgreat commented 11 months ago

> Does the following file also need to be updated to match: Aquila2/finetune/finetune.py?

Yes. The updated finetune.py adds support for the new switch, so it needs to be updated together with the script.
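For context, the `ValueError` above simply means the local copy of finetune.py predates the flag. Here is a minimal, hypothetical sketch of how such a switch is typically exposed through `HfArgumentParser`; the real field name, default, and dataclass in the updated script may differ.

```python
from dataclasses import dataclass, field

import transformers


@dataclass
class TrainingArguments(transformers.TrainingArguments):
    # Hypothetical field: the updated finetune.py may declare it elsewhere, but
    # any flag passed on the command line has to exist on one of the dataclasses
    # handed to HfArgumentParser, otherwise parse_args_into_dataclasses() raises
    # exactly the ValueError shown above.
    use_single_node: bool = field(
        default=False,
        metadata={"help": "Launch fine-tuning on a single node instead of over ssh."},
    )


parser = transformers.HfArgumentParser(TrainingArguments)
(training_args,) = parser.parse_args_into_dataclasses()
```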

tfal-yan commented 11 months ago

It works now: QLoRA on the 34B model with the single-node script. Thanks a lot.