Environment:
python==3.7.13, torch==1.11.0+cu113, funasr==1.0.15, modelscope==1.9.5

Model used:
https://www.modelscope.cn/models/damo/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary

Reference example:
https://github.com/alibaba-damo-academy/FunASR/blob/5e7eb6f160c48861cbcd39825a0cb98f98538772/examples/industrial_data_pretraining/seaco_paraformer/finetune_from_local.sh

Full error:

```
Traceback (most recent call last):
  File "../../../funasr/bin/train.py", line 42, in main_hydra
    main(**kwargs)
  File "../../../funasr/bin/train.py", line 192, in main
    trainer.run()
  File "/code/zhili_test/new/FunASR-main/funasr/train_utils/trainer.py", line 181, in run
    self._train_epoch(epoch)
  File "/code/zhili_test/new/FunASR-main/funasr/train_utils/trainer.py", line 245, in _train_epoch
    retval = self.model(**batch)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1040, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 1000, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/code/zhili_test/new/FunASR-main/funasr/models/seaco_paraformer/model.py", line 120, in forward
    assert text_lengths.dim() == 1, text_lengths.shape
AssertionError: torch.Size([32, 1])
```
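For context, the failing assertion at `funasr/models/seaco_paraformer/model.py` line 120 expects `text_lengths` to be a 1-D tensor of per-sample token counts, but the batch is delivering a column vector of shape `(32, 1)`. A minimal sketch of the shape mismatch and a local `squeeze(-1)` workaround (this is an assumption about how the mismatch can be patched on the caller's side, not necessarily the upstream fix):

```python
import torch

# Hypothetical reproduction: the dataloader produces text lengths as a
# (batch, 1) column vector instead of the expected 1-D (batch,) tensor.
text_lengths = torch.randint(1, 50, (32, 1))
assert text_lengths.dim() == 2  # this shape trips the model's assertion

# Flattening the trailing dimension restores the expected 1-D shape.
text_lengths = text_lengths.squeeze(-1)
assert text_lengths.dim() == 1, text_lengths.shape  # torch.Size([32])
```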
OK, we will fix it soon.
The bug has been fixed. Please refer to the docs: https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/tutorial/README_zh.md