modelscope / FunASR

A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing, etc.
https://www.funasr.com

Multi-GPU training error #1159

Open dsh54054 opened 7 months ago

dsh54054 commented 7 months ago

When fine-tuning the damo/speech_paraformer_asr_nat-zh-cn-16k-common-vocab8404-online model on multiple GPUs with CUDA_VISIBLE_DEVICES=6,7 python -m torch.distributed.launch --nproc_per_node 2 --master_port=29501 finetune.py, I get this error:

Task related config: error: unrecognized arguments: --local-rank=0
usage: Task related config [-h] [--config CONFIG]
  [--frontend {default,sliding_window,s3prl,fused,wav_frontend,multichannelfrontend}] [--frontend_conf FRONTEND_CONF]
  [--specaug {specaug,specaug_lfr,None}] [--specaug_conf SPECAUG_CONF]
  [--normalize {global_mvn,utterance_mvn,None}] [--normalize_conf NORMALIZE_CONF]
  [--model {asr,uniasr,paraformer,paraformer_online,paraformer_bert,bicif_paraformer,contextual_paraformer,neatcontextual_paraformer,mfcca,timestamp_prediction,rnnt,rnnt_unified,sa_asr,bat}] [--model_conf MODEL_CONF]
  [--encoder {conformer,transformer,rnn,sanm,sanm_chunk_opt,data2vec_encoder,branchformer,e_branchformer,mfcca_enc,chunk_conformer,rwkv}] [--encoder_conf ENCODER_CONF]
  [--decoder {transformer,lightweight_conv,lightweight_conv2d,dynamic_conv,dynamic_conv2d,rnn,fsmn_scama_opt,paraformer_decoder_sanm,paraformer_decoder_san,contextual_paraformer_decoder,sa_decoder}] [--decoder_conf DECODER_CONF]
  [--predictor {cif_predictor,ctc_predictor,cif_predictor_v2,cif_predictor_v3,bat_predictor,None}] [--predictor_conf PREDICTOR_CONF]
  [--encoder2 {conformer,transformer,rnn,sanm,sanm_chunk_opt}] [--encoder2_conf ENCODER2_CONF]
  [--decoder2 {transformer,lightweight_conv,lightweight_conv2d,dynamic_conv,dynamic_conv2d,rnn,fsmn_scama_opt,paraformer_decoder_sanm}] [--decoder2_conf DECODER2_CONF]
  [--predictor2 {cif_predictor,ctc_predictor,cif_predictor_v2,None}] [--predictor2_conf PREDICTOR2_CONF]
  [--stride_conv {stride_conv1d,None}] [--stride_conv_conf STRIDE_CONV_CONF]
  [--rnnt_decoder {rnnt,None}] [--rnnt_decoder_conf RNNT_DECODER_CONF]
  [--joint_network {joint_network,None}] [--joint_network_conf JOINT_NETWORK_CONF]
  [--asr_encoder {conformer,transformer,rnn,sanm,sanm_chunk_opt,data2vec_encoder,mfcca_enc}] [--asr_encoder_conf ASR_ENCODER_CONF]
  [--spk_encoder {resnet34_diar}] [--spk_encoder_conf SPK_ENCODER_CONF]
  [--split_with_space SPLIT_WITH_SPACE] [--seg_dict_file SEG_DICT_FILE] [--input_size INPUT_SIZE] [--ctc_conf CTC_CONF] [--cmvn_file CMVN_FILE]
  [--output_dir OUTPUT_DIR] [--ngpu NGPU] [--seed SEED] [--task_name TASK_NAME]
  [--dist_backend DIST_BACKEND] [--dist_init_method DIST_INIT_METHOD] [--dist_world_size DIST_WORLD_SIZE] [--dist_rank DIST_RANK] [--local_rank LOCAL_RANK]
  [--dist_master_addr DIST_MASTER_ADDR] [--dist_master_port DIST_MASTER_PORT] [--dist_launcher {slurm,mpi,None}]
  [--multiprocessing_distributed MULTIPROCESSING_DISTRIBUTED] [--unused_parameters UNUSED_PARAMETERS] [--gpu_id GPU_ID]
  [--cudnn_enabled CUDNN_ENABLED] [--cudnn_benchmark CUDNN_BENCHMARK] [--cudnn_deterministic CUDNN_DETERMINISTIC]
  [--max_epoch MAX_EPOCH] [--max_update MAX_UPDATE] [--batch_interval BATCH_INTERVAL] [--patience PATIENCE]
  [--val_scheduler_criterion VAL_SCHEDULER_CRITERION VAL_SCHEDULER_CRITERION]
  [--early_stopping_criterion EARLY_STOPPING_CRITERION EARLY_STOPPING_CRITERION EARLY_STOPPING_CRITERION]
  [--best_model_criterion BEST_MODEL_CRITERION [BEST_MODEL_CRITERION ...]]
  [--keep_nbest_models KEEP_NBEST_MODELS [KEEP_NBEST_MODELS ...]]
  [--nbest_averaging_interval NBEST_AVERAGING_INTERVAL]
  [--grad_clip GRAD_CLIP] [--grad_clip_type GRAD_CLIP_TYPE] [--grad_noise GRAD_NOISE] [--accum_grad ACCUM_GRAD] [--resume RESUME]
  [--train_dtype {float16,float32,float64}] [--use_amp USE_AMP] [--log_interval LOG_INTERVAL] [--use_tensorboard USE_TENSORBOARD]
  [--init_param INIT_PARAM] [--ignore_init_mismatch IGNORE_INIT_MISMATCH] [--freeze_param FREEZE_PARAM]
  [--dataset_type DATASET_TYPE] [--dataset_conf DATASET_CONF] [--data_dir DATA_DIR] [--train_set TRAIN_SET] [--valid_set VALID_SET] [--data_file_names DATA_FILE_NAMES]
  [--speed_perturb SPEED_PERTURB [SPEED_PERTURB ...]] [--use_preprocessor USE_PREPROCESSOR]
  [--optim OPTIM] [--optim_conf OPTIM_CONF] [--scheduler SCHEDULER] [--scheduler_conf SCHEDULER_CONF]
  [--init {chainer,xavier_uniform,xavier_normal,kaiming_uniform,kaiming_normal,None}]
  [--token_list TOKEN_LIST] [--token_type {bpe,char,word}] [--bpemodel BPEMODEL] [--cleaner {None,tacotron,jaconv,vietnamese}]
  [--g2p {None,g2p_en,g2p_en_no_space,pyopenjtalk,pyopenjtalk_kana,pyopenjtalk_accent,pyopenjtalk_accent_with_pause,pyopenjtalk_prosody,pypinyin_g2p,pypinyin_g2p_phone,espeak_ng_arabic,espeak_ng_german,espeak_ng_french,espeak_ng_spanish,espeak_ng_russian,espeak_ng_greek,espeak_ng_finnish,espeak_ng_hungarian,espeak_ng_dutch,espeak_ng_english_us_vits,espeak_ng_hindi,g2pk,g2pk_no_space,korean_jaso,korean_jaso_no_space}]
  [--use_pai USE_PAI] [--simple_ddp SIMPLE_DDP] [--num_worker_count NUM_WORKER_COUNT]
  [--access_key_id ACCESS_KEY_ID] [--access_key_secret ACCESS_KEY_SECRET] [--endpoint ENDPOINT] [--bucket_name BUCKET_NAME] [--oss_bucket OSS_BUCKET]
  [--enable_lora ENABLE_LORA] [--lora_bias LORA_BIAS]
Task related config: error: unrecognized arguments: --local-rank=1
[2023-12-08 13:54:30,786] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 2) local_rank: 0 (pid: 683876) of binary: /code/dengshuhao/local/miniconda3/envs/funasr/bin/python
Traceback (most recent call last):
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/code/dengshuhao/local/miniconda3/envs/funasr/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

finetune.py FAILED

Failures:
[1]:
  time      : 2023-12-08_13:54:30
  host      : nbser-pusongbai5-gpu-0
  rank      : 1 (local_rank: 1)
  exitcode  : 2 (pid: 683877)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time      : 2023-12-08_13:54:30
  host      : nbser-pusongbai5-gpu-0
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 683876)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
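The key line in this log is "Task related config: error: unrecognized arguments: --local-rank=0". Since PyTorch 2.0, torch.distributed.launch injects the rank into the child script's argv as the hyphenated --local-rank, while the usage dump above shows the script only declares the underscored --local_rank, so argparse rejects it and exits with code 2 on both ranks. A minimal sketch of one possible workaround, assuming the script's options are built with argparse (this standalone parser is illustrative, not FunASR's actual code):

    import argparse

    parser = argparse.ArgumentParser("Task related config")
    # Register both spellings on a single option: PyTorch >= 2.0 launchers
    # inject --local-rank, while older launchers (and this script's existing
    # config) use --local_rank.
    parser.add_argument("--local_rank", "--local-rank", dest="local_rank",
                        type=int, default=0)
    args, unknown = parser.parse_known_args()
    print(f"local_rank = {args.local_rank}")

parse_known_args is used here so other launcher-injected flags are tolerated rather than fatal; whether that leniency is acceptable depends on how strict the real config parsing needs to be.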

wuxiuzhi738 commented 7 months ago

Keep an eye on memory usage; you may have run out of memory.

C-rawler commented 3 months ago

I hit the same error as the original poster. Has anyone solved multi-GPU training with modelscope, or is it an environment problem?

Task related config: error: unrecognized arguments: --local-rank=0
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 185461) of binary: /opt/conda/envs/modelscope/bin/python
Traceback (most recent call last):
  File "/opt/conda/envs/modelscope/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/modelscope/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in <module>
    main()
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
    launch(args)
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
    run(args)
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/opt/conda/envs/modelscope/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

finetune.py FAILED

Failures:
[1]:
  time      : 2024-03-20_17:56:22
  host      : jxgd-R5300-G5
  rank      : 1 (local_rank: 1)
  exitcode  : 2 (pid: 185462)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time      : 2024-03-20_17:56:22
  host      : jxgd-R5300-G5
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 185461)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

My command: CUDA_VISIBLE_DEVICES=2,3 python -m torch.distributed.launch --nproc_per_node 2 finetune.py
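Both failing commands go through python -m torch.distributed.launch, which is deprecated and, under PyTorch 2.x, passes the hyphenated --local-rank that this parser rejects. One possible alternative (a sketch, not a confirmed FunASR fix) is to launch with torchrun instead, e.g. CUDA_VISIBLE_DEVICES=2,3 torchrun --nproc_per_node 2 --master_port 29501 finetune.py. torchrun injects no rank argument into argv at all; the script would instead read the rank from the environment variables it exports:

    import os

    # torchrun exports LOCAL_RANK, RANK, and WORLD_SIZE to every worker,
    # so no --local-rank / --local_rank command-line argument is involved.
    local_rank = int(os.environ.get("LOCAL_RANK", "0"))
    world_size = int(os.environ.get("WORLD_SIZE", "1"))
    print(f"local_rank={local_rank}, world_size={world_size}")

This only helps if finetune.py (or the FunASR training code behind it) actually falls back to LOCAL_RANK when the command-line flag is absent; the usage dump above shows it defines its own --local_rank option, so that fallback would need to be verified or added.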