modelscope / FunASR

A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
https://www.funasr.com

docker: error when starting the service inside the container #2152

Closed xiasi0 closed 4 weeks ago

xiasi0 commented 4 weeks ago

Please take a look at the log — what am I missing? Thanks. Following this deployment guide: https://github.com/modelscope/FunASR/blob/main/runtime/docs/SDK_advanced_guide_offline_gpu.md I ran the following commands:

docker pull registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.2.0
docker run --gpus=all -p 10098:10095 -it --privileged=true -v /root:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-gpu-0.2.0
cd /workspace/FunASR/runtime
nohup bash run_server.sh \
  --download-model-dir /workspace/models \
  --model-dir damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch \
  --vad-dir damo/speech_fsmn_vad_zh-cn-16k-common-onnx \
  --punc-dir damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx \
  --itn-dir thuduj12/fst_itn_zh \
  --lm-dir damo/speech_ngram_lm_zh-cn-ai-wesp-fst \
  --certfile  ../../../ssl_key/server.crt \
  --keyfile ../../../ssl_key/server.key \
  --hotword ../../hotwords.txt > log.txt 2>&1 &

The following log was output:

UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  fp16_scale = int(2 * absmax // 65536)
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 58, in <module>
    main()
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 51, in main
    export_model.export(
  File "/workspace/FunASR/funasr/auto/auto_model.py", line 662, in export
    export_dir = export_utils.export(model=model, data_in=data_list, **kwargs)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 36, in export
    _bladedisc_opt_for_encdec(m, path=export_dir, enable_fp16=True)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 199, in _bladedisc_opt_for_encdec
    model.encoder = _bladedisc_opt(model.encoder, input_data[:2])
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 123, in _bladedisc_opt
    opt_model = torch_blade.optimize(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/optimization.py", line 111, in optimize
    return _optimize(model, allow_tracing, model_inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/optimization.py", line 36, in _optimize
    optimized_model = export(model, allow_tracing, model_inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 244, in export
    _model = _deepcopy(model)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 113, in _deepcopy
    _model = _deep_copy_script_module(model)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 106, in _deep_copy_script_module
    _model = copy.deepcopy(model)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 153, in deepcopy
    y = copier(memo)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parameter.py", line 56, in __deepcopy__
    result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.67 GiB total capacity; 1.31 GiB already allocated; 2.94 MiB free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
rescale encoder modules with factor=11
I20241016 15:11:17.138350  1304 funasr-wss-server.cpp:308] Failed to download model from modelscope. If you set local asr model path, you can ignore the errors.
E20241016 15:11:17.138388  1304 funasr-wss-server.cpp:312] /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript do not exists.
I20241016 15:14:40.127483  1475 funasr-wss-server.cpp:25] model-dir : damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch
I20241016 15:14:40.127522  1475 funasr-wss-server.cpp:25] quantize : true
I20241016 15:14:40.127530  1475 funasr-wss-server.cpp:25] bladedisc : true
I20241016 15:14:40.127537  1475 funasr-wss-server.cpp:25] vad-dir : damo/speech_fsmn_vad_zh-cn-16k-common-onnx
I20241016 15:14:40.127542  1475 funasr-wss-server.cpp:25] vad-quant : true
I20241016 15:14:40.127547  1475 funasr-wss-server.cpp:25] punc-dir : damo/punc_ct-transformer_cn-en-common-vocab471067-large-onnx
I20241016 15:14:40.127552  1475 funasr-wss-server.cpp:25] punc-quant : true
I20241016 15:14:40.127559  1475 funasr-wss-server.cpp:25] itn-dir : thuduj12/fst_itn_zh
I20241016 15:14:40.127564  1475 funasr-wss-server.cpp:25] lm-dir : damo/speech_ngram_lm_zh-cn-ai-wesp-fst
I20241016 15:14:40.127570  1475 funasr-wss-server.cpp:25] hotword : /workspace/FunASR/runtime/websocket/hotwords.txt
I20241016 15:14:40.127576  1475 funasr-wss-server.cpp:25] model-revision : v2.0.5
I20241016 15:14:40.127583  1475 funasr-wss-server.cpp:25] vad-revision : v2.0.6
I20241016 15:14:40.127588  1475 funasr-wss-server.cpp:25] punc-revision : v2.0.5
I20241016 15:14:40.127594  1475 funasr-wss-server.cpp:25] itn-revision : v1.0.1
I20241016 15:14:40.127600  1475 funasr-wss-server.cpp:25] lm-revision : v1.0.2
I20241016 15:14:40.127609  1475 funasr-wss-server.cpp:216] Download model: damo/speech_fsmn_vad_zh-cn-16k-common-onnx from modelscope: 
/usr/lib/python3.8/runpy.py:127: RuntimeWarning: 'funasr.download.runtime_sdk_download_tool' found in sys.modules after import of package 'funasr.download', but prior to execution of 'funasr.download.runtime_sdk_download_tool'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
2024-10-16 15:14:41,437 - modelscope - INFO - PyTorch version 1.12.0+cu113 Found.
2024-10-16 15:14:41,438 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2024-10-16 15:14:41,454 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 4f2c0bc05f23826fce200b3af6de22cf and a total number of 980 components indexed
2024-10-16 15:14:41,747 - modelscope - INFO - Use user-specified model revision: v2.0.6
Notice: ffmpeg is not installed. torchaudio is used to load audio
If you want to use ffmpeg backend to load audio, please install it by:
    sudo apt install ffmpeg # ubuntu
    # brew install ffmpeg # mac
transformer is not installed, please install it if you want to use related modules
I20241016 15:14:42.130971  1475 funasr-wss-server.cpp:235] Set vad-dir : /workspace/models/damo/speech_fsmn_vad_zh-cn-16k-common-onnx
I20241016 15:14:42.131067  1475 funasr-wss-server.cpp:289] Download model: damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch from modelscope: 
/usr/lib/python3.8/runpy.py:127: RuntimeWarning: 'funasr.download.runtime_sdk_download_tool' found in sys.modules after import of package 'funasr.download', but prior to execution of 'funasr.download.runtime_sdk_download_tool'; this may result in unpredictable behaviour
  warn(RuntimeWarning(msg))
2024-10-16 15:14:43,400 - modelscope - INFO - PyTorch version 1.12.0+cu113 Found.
2024-10-16 15:14:43,401 - modelscope - INFO - Loading ast index from /root/.cache/modelscope/ast_indexer
2024-10-16 15:14:43,417 - modelscope - INFO - Loading done! Current index file version is 1.15.0, with md5 4f2c0bc05f23826fce200b3af6de22cf and a total number of 980 components indexed
2024-10-16 15:14:43,843 - modelscope - INFO - Use user-specified model revision: v2.0.5
2024-10-16 15:14:44,649 - modelscope - WARNING - Using the master branch is fragile, please use it with caution!
2024-10-16 15:14:44,649 - modelscope - INFO - Use user-specified model revision: master
/workspace/FunASR/funasr/utils/export_utils.py:161: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  fp16_scale = int(2 * absmax // 65536)
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 58, in <module>
    main()
  File "/workspace/FunASR/funasr/download/runtime_sdk_download_tool.py", line 51, in main
    export_model.export(
  File "/workspace/FunASR/funasr/auto/auto_model.py", line 662, in export
    export_dir = export_utils.export(model=model, data_in=data_list, **kwargs)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 36, in export
    _bladedisc_opt_for_encdec(m, path=export_dir, enable_fp16=True)
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 199, in _bladedisc_opt_for_encdec
    model.encoder = _bladedisc_opt(model.encoder, input_data[:2])
  File "/workspace/FunASR/funasr/utils/export_utils.py", line 123, in _bladedisc_opt
    opt_model = torch_blade.optimize(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/optimization.py", line 111, in optimize
    return _optimize(model, allow_tracing, model_inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/optimization.py", line 36, in _optimize
    optimized_model = export(model, allow_tracing, model_inputs)
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 244, in export
    _model = _deepcopy(model)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 113, in _deepcopy
    _model = _deep_copy_script_module(model)
  File "/usr/local/lib/python3.8/dist-packages/torch_blade/exporter.py", line 106, in _deep_copy_script_module
    _model = copy.deepcopy(model)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/usr/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/usr/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/usr/lib/python3.8/copy.py", line 153, in deepcopy
    y = copier(memo)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/parameter.py", line 56, in __deepcopy__
    result = type(self)(self.data.clone(memory_format=torch.preserve_format), self.requires_grad)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.67 GiB total capacity; 1.31 GiB already allocated; 2.94 MiB free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Notice: ffmpeg is not installed. torchaudio is used to load audio
If you want to use ffmpeg backend to load audio, please install it by:
    sudo apt install ffmpeg # ubuntu
    # brew install ffmpeg # mac
transformer is not installed, please install it if you want to use related modules
model is not exist, begin to export /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript
funasr version: 1.1.8.
Check update of funasr, and it would cost few times. You may disable it by set `disable_update=True` in AutoModel
New version is available: 1.1.12.
Please use the command "pip install -U funasr" to upgrade.
rescale encoder modules with factor=11
I20241016 15:14:50.046036  1475 funasr-wss-server.cpp:308] Failed to download model from modelscope. If you set local asr model path, you can ignore the errors.
E20241016 15:14:50.046073  1475 funasr-wss-server.cpp:312] /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript do not exists.
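The export fails with "2.94 MiB free" on an 11.67 GiB card, which usually means another process already holds the GPU memory rather than the model being too large. A diagnostic sketch (not an official FunASR step; run it on the GPU host or inside the container):

```shell
# List processes currently holding GPU memory. If another process occupies
# the card, stop it (or select a free GPU) before re-running run_server.sh.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
else
  echo "nvidia-smi not found; run this on the GPU host"
fi
```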
zhouhuirun2015 commented 4 weeks ago

RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.67 GiB total capacity; 1.31 GiB already allocated; 2.94 MiB free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Notice: ffmpeg is not installed. torchaudio is used to load audio
If you want to use ffmpeg backend to load audio, please install it by:
    sudo apt install ffmpeg # ubuntu
    # brew install ffmpeg # mac
transformer is not installed, please install it if you want to use related modules
model is not exist, begin to export /workspace/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/model_blade.torchscript
funasr version: 1.1.8.
Check update of funasr, and it would cost few times. You may disable it by set `disable_update=True` in AutoModel
New version is available: 1.1.12.
Please use the command "pip install -U funasr" to upgrade.
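If the card itself is free, the error message's own hint can be tried before relaunching the server: set the PyTorch caching-allocator option it mentions. This is an experiment to reduce fragmentation, not a guaranteed fix; the 128 MiB value is an assumption, not a documented FunASR setting.

```shell
# Set the allocator hint from the OOM message, then re-run run_server.sh
# with the same flags as before (the value 128 is a starting point to tune).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "PYTORCH_CUDA_ALLOC_CONF=$PYTORCH_CUDA_ALLOC_CONF"
```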