See error:

```
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 58, in <module>
    main()
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 51, in main
    export_model.export(
  File "/workspace/FunASR/funasr/auto/auto_model.py", line 613, in export
    export_dir = export_utils.export(model=model, data_in=data_list, **kwargs)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 36, in export
    _bladedisc_opt_for_encdec(m, path=export_dir, enable_fp16=True)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 195, in _bladedisc_opt_for_encdec
    model_script = torch.jit.trace(model, input_data)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 750, in trace
    return trace_module(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 967, in trace_module
    module._c._create_method_from_trace(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/workspace/FunASR/funasr/models/seaco_paraformer/export_meta.py", line 134, in export_backbone_forward
    decoder_out, decoder_hidden, _ = self.decoder(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1107, in _slow_forward
    return self.forward(*input, **kwargs)
RuntimeError: forward() expected at most 5 argument(s) but received 7 argument(s). Declaration: forward(torch.funasr.models.paraformer.decoder.___torch_mangle_1587.ParaformerSANMDecoderExport self, Tensor hs_pad, Tensor hlens, Tensor ys_in_pad, Tensor ys_in_lens) -> ((Tensor, Tensor))
```
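The failure mode can be illustrated without torch: a minimal, torch-free sketch (class and argument names hypothetical, only the traced declaration is taken from the error above) of how a caller passing more positional arguments than the decoder's `forward()` declares produces this kind of error, 7 received vs. at most 5 accepted counting `self`:

```python
# Plain-Python analogue of the traced decoder's declaration:
# forward(self, hs_pad, hlens, ys_in_pad, ys_in_lens)
class ParaformerSANMDecoderExportSketch:
    def forward(self, hs_pad, hlens, ys_in_pad, ys_in_lens):
        return hs_pad, ys_in_pad

decoder = ParaformerSANMDecoderExportSketch()

# The seaco_paraformer export wrapper passes two extra tensors
# (names hypothetical here), so the call carries 7 args including self:
try:
    decoder.forward("hs_pad", "hlens", "ys_in_pad", "ys_in_lens",
                    "extra_a", "extra_b")
except TypeError as e:
    # Analogous to the RuntimeError raised inside torch.jit tracing
    print("mismatch:", e)
```

This suggests the SeACo-Paraformer export path hands the exported decoder arguments that `ParaformerSANMDecoderExport.forward` does not declare.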
Notice: In order to resolve issues more efficiently, please raise the issue following the template and fill in the details.
🐛 Bug
Pulled the official image registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1 with Docker and started the server:

```
sudo docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1
mkdir -p ./funasr-runtime-resources/models
sudo docker run --gpus=all -p 10098:10095 -it --privileged=true \
  -v $PWD/funasr-runtime-resources/models:/workspace/models \
  registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1

cd FunASR/runtime
nohup bash run_server.sh \
  --download-model-dir /workspace/models \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
  --lm-dir damo/speech_ngram_lm_zh-cn-ai-wesp-fst \
  --itn-dir thuduj12/fst_itn_zh \
  --hotword /workspace/models/hotwords.txt > log.txt 2>&1 &
```

The server starts normally with damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch and with damo/speech_paraformer-large-contextual_asr_nat-zh-cn-16k-common-vocab8404. However, when starting the seaco_paraformer hotword model by specifying --model-dir iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch, it fails with the error above.
To Reproduce
Run the Docker and run_server.sh commands above, but specify --model-dir iic/speech_seaco_paraformer_large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.
Expected behavior
The seaco_paraformer model exports successfully and inference runs normally.
Environment
Official image: registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.1.1
Additional context