Error log
```
[chushaobo] 2023-08-16 16:59:48,407 (build_trainer:677) INFO: Scheduler: [WarmupLR(warmup_steps=30000)]
[chushaobo] 2023-08-16 16:59:48,407 (build_trainer:683) INFO: Saving the configuration in ./checkpoint/config.yaml
[chushaobo] 2023-08-16 16:59:48,740 (build_trainer:692) INFO: Loading pretrained params from ./models_from_modelscope/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model.pb
Traceback (most recent call last):
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/modelscope/utils/registry.py", line 212, in build_from_cfg
    return obj_cls(**args)
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/modelscope/trainers/audio/asr_trainer.py", line 102, in __init__
    self.trainer = build_trainer.build_trainer(
  File "/home/chushaobo/project/FunASR/funasr/bin/build_trainer.py", line 704, in build_trainer
    train_dataloader, valid_dataloader = build_dataloader(args)
  File "/home/chushaobo/project/FunASR/funasr/build_utils/build_dataloader.py", line 20, in build_dataloader
    train_iter_factory = SequenceIterFactory(args, mode="train")
  File "/home/chushaobo/project/FunASR/funasr/datasets/small_datasets/sequence_iter_factory.py", line 71, in __init__
    min_batch_size=torch.distributed.get_world_size(),
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 1067, in get_world_size
    return _get_group_size(group)
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 453, in _get_group_size
    default_pg = _get_default_group()
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 584, in _get_default_group
    raise RuntimeError(
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/chushaobo/project/FunASR/mytrain.py", line 88, in <module>
    modelscope_finetune(params)
  File "/home/chushaobo/project/FunASR/mytrain.py", line 72, in modelscope_finetune
    trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/modelscope/trainers/builder.py", line 39, in build_trainer
    return build_from_cfg(cfg, TRAINERS, default_args=default_args)
  File "/home/chushaobo/anaconda3/envs/funasr/lib/python3.9/site-packages/modelscope/utils/registry.py", line 215, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
RuntimeError: ASRTrainer: Default process group has not been initialized, please make sure to call init_process_group.
```
OS: Ubuntu 20.04
Python/C++ Version: Python 3.9
Package Version: torch-1.13.1+cu117, torchaudio-0.13.1+cu117, modelscope-1.8., funasr-0.7.4 (from pip list)
Model: speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch
Command: python finetune.py
Details: single-machine GPU training
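The traceback shows that `SequenceIterFactory` calls `torch.distributed.get_world_size()` unconditionally, which raises unless a default process group has already been created. A minimal sketch of one possible workaround is to initialize a one-process group before building the trainer; the backend choice, address, and port below are assumptions for illustration, not part of the original report:

```python
import os
import torch.distributed as dist

# Satisfy torch.distributed.get_world_size() for single-machine training
# by creating a one-process default group. The default init_method
# ("env://") reads MASTER_ADDR and MASTER_PORT from the environment.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

if not dist.is_initialized():
    # "gloo" is used here so the sketch also runs without a GPU;
    # "nccl" would be the usual choice for GPU training.
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

# get_world_size() now returns 1 instead of raising RuntimeError,
# so the trainer can be built afterwards, e.g.:
# trainer = build_trainer(Trainers.speech_asr_trainer, default_args=kwargs)
```

Alternatively, launching the script through a distributed launcher such as `torchrun --nproc_per_node=1 finetune.py` sets up the process group environment automatically, which may be what the FunASR fine-tuning recipe expects.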