Closed: flowbywind closed this issue 5 months ago
Read the error message.
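As the traceback suggests, a minimal sketch of the fix is to disable the NCCL P2P and InfiniBand transports before launching (the `deepspeed` invocation shown in the comment is the one from the original report, abbreviated here):

```shell
# Disable the P2P and InfiniBand NCCL transports, as the error message asks;
# RTX 4000-series consumer GPUs do not support them.
export NCCL_P2P_DISABLE=1
export NCCL_IB_DISABLE=1

# Then relaunch exactly as before, e.g.:
# deepspeed --num_gpus 7 src/train_bash.py --deepspeed config/deepspeed/ds_config.json ...

echo "NCCL_P2P_DISABLE=$NCCL_P2P_DISABLE NCCL_IB_DISABLE=$NCCL_IB_DISABLE"
```

Alternatively, as the error message itself notes, launching with `accelerate launch` instead of `deepspeed` sets these variables automatically on 4000-series cards.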
Reminder
Reproduction
The launch command is as follows:

deepspeed --num_gpus 7 src/train_bash.py --deepspeed config/deepspeed/ds_config.json --ddp_timeout 180000000 --stage sft --do_train True --model_name_or_path /hy-tmp/LLM/models/Qwen1.5-14B-Chat --dataset_dir data --dataset yuansheng_sft_zh --template qwen --finetuning_type full --output_dir saves/Qwen1.5-14B-Chat/full/train_qwen14_full_kefu_2024-04-02-23-01 --cutoff_len 1024 --learning_rate 1e-04 --num_train_epochs 3.0 --max_samples 100000 --per_device_train_batch_size 4 --gradient_accumulation_steps 4 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 5 --max_steps 3000 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 0 --val_size 0.1 --evaluation_strategy steps --eval_steps 100 --per_device_eval_batch_size 2 --load_best_model_at_end True --plot_loss True --fp16 True

The error traceback is as follows:

File "/root/autodl-tmp/LLM/LLaMA-Factory/src/llmtuner/hparams/parser.py", line 47, in _parse_args
  (*parsed_args, unknown_args) = parser.parse_args_into_dataclasses(return_remaining_strings=True)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/hf_argparser.py", line 338, in parse_args_into_dataclasses
  obj = dtype(**inputs)
File "<string>", line 129, in __init__
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/training_args.py", line 1551, in __post_init__
  and (self.device.type != "cuda")
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/training_args.py", line 2027, in device
  return self._setup_devices
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/utils/generic.py", line 63, in __get__
  cached = self.fget(obj)
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/transformers/training_args.py", line 1959, in _setup_devices
  self.distributed_state = PartialState(timeout=timedelta(seconds=self.ddp_timeout))
File "/root/miniconda3/envs/llm/lib/python3.9/site-packages/accelerate/state.py", line 214, in __init__
  raise NotImplementedError(
NotImplementedError: Using RTX 4000 series doesn't support faster communication broadband via P2P or IB. Please set `NCCL_P2P_DISABLE="1"` and `NCCL_IB_DISABLE="1"` or use `accelerate launch` which will do this automatically.

Expected behavior
No response
System Info
No response
Others
No response