Closed lang101 closed 2 years ago
Please give more info.
https://github.com/PaddlePaddle/PaddleSpeech/blob/44ee5cd80596b93e8ce3e3f858acba509dfe21e9/paddlespeech/server/engine/asr/online/asr_engine.py#L75-L77 When these parameters (cfg_path, am_model, am_params) are not empty (i.e. configured in application.yaml), an error occurs at L113-L114:
lm_url = pretrained_models[tag]['lm_url']
lm_md5 = pretrained_models[tag]['lm_md5']
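For context, this looks like a plain unbound-local problem: 'tag' is only assigned in the branch taken when the paths are empty, yet it is read afterwards in every case. A minimal sketch of that shape in plain Python (simplified names and placeholder values; not the actual PaddleSpeech source):

```python
# Minimal sketch of the failure mode (simplified, NOT the actual PaddleSpeech
# source): 'tag' is only bound in the branch that runs when no local model
# paths are configured, but it is read later in both cases.
pretrained_models = {
    'deepspeech2online_aishell-zh-16k': {
        'lm_url': 'https://example.com/lm.klm',  # placeholder values
        'lm_md5': 'placeholder-md5',
    },
}

def init_model(model_type, lang, sample_rate,
               cfg_path=None, am_model=None, am_params=None):
    if cfg_path is None or am_model is None or am_params is None:
        # only this branch defines 'tag'
        tag = model_type + '-' + lang + '-' + str(sample_rate)
        # ... download the default pretrained model keyed by 'tag' ...
    else:
        # locally configured model files: 'tag' is never assigned here
        pass

    # reached in both branches -> UnboundLocalError when local paths are set
    lm_url = pretrained_models[tag]['lm_url']
    lm_md5 = pretrained_models[tag]['lm_md5']
    return lm_url, lm_md5

# Reproduces the reported situation: passing non-empty paths raises an
# UnboundLocalError (e.g. "local variable 'tag' referenced before assignment").
try:
    init_model('deepspeech2online_aishell', 'zh', 16000,
               cfg_path='model.yaml', am_model='x.pdmodel', am_params='x.pdiparams')
except UnboundLocalError as e:
    print(e)
```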
Same as above.
Please provide a screenshot of the Python error; we need to reproduce this problem.
asr_online:
    model_type: 'deepspeech2online_aishell'
    am_model: '/data/model/deepspeech2online_aishell-zh-16k/exp/deepspeech2_online/checkpoints/avg_1.jit.pdmodel'      # the pdmodel file of am static model [optional]
    am_params: '/data/model/deepspeech2online_aishell-zh-16k/exp/deepspeech2_online/checkpoints/avg_1.jit.pdiparams'   # the pdiparams file of am static model [optional]
    lang: 'zh'
    sample_rate: 16000
    cfg_path: '/data/model/deepspeech2online_aishell-zh-16k/model.yaml'
    decode_method:
    force_yes: True

    am_predictor_conf:
        device: 'cpu'          # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False       # True -> print glog
        summary: True          # False -> do not show predictor config
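Unrelated to the crash itself, a quick way to rule out path typos is to load the YAML above and verify that every configured file exists. A minimal sketch using PyYAML; check_asr_paths is a hypothetical helper, not part of PaddleSpeech:

```python
# Hypothetical sanity check (not part of PaddleSpeech): verify that the
# files referenced in application.yaml actually exist before starting
# the server.
import os
import yaml  # PyYAML

def check_asr_paths(config_file):
    with open(config_file) as f:
        conf = yaml.safe_load(f)
    # assumes 'asr_online' is a top-level key, as in the snippet above
    asr = conf.get('asr_online', {})
    for key in ('cfg_path', 'am_model', 'am_params'):
        path = asr.get(key)
        status = 'OK' if path and os.path.isfile(path) else 'MISSING or empty'
        print(f'{key}: {status} ({path})')

check_asr_paths('./paddlespeech/server/conf/application.yaml')
```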
(test) root@0412fc2dad04:/data/test/PaddleSpeech# paddlespeech_server start --config_file ./paddlespeech/server/conf/application.yaml
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error
[nltk_data] [Errno 101] Network is unreachable>
[nltk_data] Error loading cmudict: <urlopen error [Errno 101] Network
[nltk_data] is unreachable>
Traceback (most recent call last):
File "/usr/local/miniconda3/envs/test/bin/paddlespeech_server", line 33, in
asr_online:
    model_type: 'deepspeech2online_aishell'
    am_model: '/data/model/deepspeech2online_aishell-zh-16k/exp/deepspeech2_online/checkpoints/avg_1.jit.pdmodel'      # the pdmodel file of am static model [optional]
    am_params: '/data/model/deepspeech2online_aishell-zh-16k/exp/deepspeech2_online/checkpoints/avg_1.jit.pdiparams'   # the pdiparams file of am static model [optional]
    lang: 'zh'
    sample_rate: 16000
    cfg_path: '/data/model/deepspeech2online_aishell-zh-16k/model.yaml'
    decode_method:
    force_yes: True

    am_predictor_conf:
        device: 'cpu'          # set 'gpu:id' or 'cpu'
        switch_ir_optim: True
        glog_info: False       # True -> print glog
        summary: True          # False -> do not show predictor config

    chunk_buffer_conf:
        frame_duration_ms: 80
        shift_ms: 40
        sample_rate: 16000
        sample_width: 2

    vad_conf:
        aggressiveness: 2
        sample_rate: 16000
        frame_duration_ms: 20
        sample_width: 2
        padding_ms: 200
        padding_ratio: 0.9
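As an aside, the vad_conf fields above follow the usual WebRTC-style VAD parameters. A minimal sketch of what they mean in practice, using the webrtcvad package (an assumption; it may not be the library the server uses internally):

```python
# Illustration only: how the vad_conf values map onto a WebRTC-style VAD,
# using the webrtcvad package (assumption: the server's internal VAD
# behaves similarly; this is not PaddleSpeech code).
import webrtcvad

vad = webrtcvad.Vad(2)               # aggressiveness: 2 (0 = least, 3 = most aggressive)
sample_rate = 16000                  # sample_rate: 16000
frame_duration_ms = 20               # frame_duration_ms: 20 (must be 10, 20, or 30 ms)
sample_width = 2                     # sample_width: 2 -> 16-bit PCM

frame_len = int(sample_rate * frame_duration_ms / 1000) * sample_width
silent_frame = b'\x00' * frame_len   # one 20 ms frame of silence

print(vad.is_speech(silent_frame, sample_rate))  # expected: False
```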
(test) root@0412fc2dad04:/data/test/PaddleSpeech# paddlespeech_server start --config_file ./paddlespeech/server/conf/ws_application.yaml
Same error as previously shown.
The 'tag' problem has been fixed in the latest code. Thank you for raising this issue.
The 'tag' problem has been fixed in the latest code: the 'tag' assignment should be placed above the 'if' block. Fixed.
The error occurred because 'cfg_path', 'am_model', and 'am_params' are not None, so the branch that assigns 'tag' is skipped; the local variable 'tag' must be assigned in every case.
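In other words, the fix is to compute 'tag' before the branch so it is always bound. A minimal sketch of that change (simplified placeholder code; not a copy of the actual patch):

```python
# Sketch of the fix described above (simplified, not the actual patch):
# compute 'tag' before the 'if', so it is bound in both branches.
pretrained_models = {
    'deepspeech2online_aishell-zh-16k': {
        'lm_url': 'https://example.com/lm.klm',  # placeholder values
        'lm_md5': 'placeholder-md5',
    },
}

def init_model(model_type, lang, sample_rate,
               cfg_path=None, am_model=None, am_params=None):
    tag = model_type + '-' + lang + '-' + str(sample_rate)  # always assigned

    if cfg_path is None or am_model is None or am_params is None:
        # ... download and use the default pretrained model for 'tag' ...
        pass
    else:
        # ... use the locally configured cfg_path / am_model / am_params ...
        pass

    # safe in both branches now
    lm_url = pretrained_models[tag]['lm_url']
    lm_md5 = pretrained_models[tag]['lm_md5']
    return lm_url, lm_md5

print(init_model('deepspeech2online_aishell', 'zh', 16000,
                 cfg_path='model.yaml', am_model='x.pdmodel',
                 am_params='x.pdiparams'))
```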