I am using Docker to test one of my own audio files. I changed a little code in demo_server.py and got the error below. Is it due to the Paddle version? The installed version is paddlepaddle-gpu==1.6.1.post97.
PaddleCheckError: You are not allowed to load partial data via load_combine_op, use load_op instead. at [/paddle/paddle/fluid/operators/load_combine_op.h:105]
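For reference, a quick way to confirm which Paddle build is actually running inside the container, assuming the standard paddle.__version__ attribute of the 1.x wheels:

    import paddle
    # Print the installed PaddlePaddle version, e.g. "1.6.1".
    print(paddle.__version__)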
Can you show me your code?
I ran into the same problem:

python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 8086 \
    --cutoff_prob 0.99 \
    --use_gpu False \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --mean_std_path models/aishell/mean_std.npz

This is the command I start the server with. How should I go about debugging it?
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2426, in append_op
attrs=kwargs.get("attrs", None))
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 711, in load_vars
attrs={'file_path': os.path.join(load_dirname, filename)})
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 668, in load_vars
filename=filename)
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 784, in load_params
filename=filename)
File "/home/test_baidu_ai/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
filename="params.pdparams")
File "/home/test_baidu_ai/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
self.init_from_pretrained_model(exe, infer_program)
File "deploy/demo_server.py", line 195, in file_to_transcript
feeding_dict=data_generator.feeding)
File "deploy/demo_server.py", line 136, in warm_up_test
transcript = audio_process_handler(sample['audio_filepath'])
File "deploy/demo_server.py", line 219, in start_server
num_test_cases=3)
File "deploy/demo_server.py", line 234, in main
start_server()
File "deploy/demo_server.py", line 238, in
PaddleCheckError: You are not allowed to load partial data via load_combine_op, use load_op instead. at [/paddle/paddle/fluid/operators/load_combine_op.h:105] [operator < load_combine > error]
The default configuration of deploy/demo_server.py is for the LibriSpeech dataset, which is English, so if you want to use a Mandarin dataset like aishell you should make sure the following settings are correct.
About the network structure: --rnn_layer_size=1024 --use_gru=True
About the warm-up dataset: it must be a Mandarin manifest, e.g. --warmup_manifest='data/aishell/manifest.test'
Please try a command like this:

python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 8086 \
    --cutoff_prob 0.99 \
    --use_gpu False \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --mean_std_path models/aishell/mean_std.npz \
    --rnn_layer_size=1024 \
    --use_gru=True \
    --warmup_manifest='data/aishell/manifest.test'

@gnaytx @lamudazh
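For context on why these flags matter: the traceback ends in model_utils/model.py (init_from_pretrained_model), which calls fluid.io.load_params on the freshly built inference program with filename="params.pdparams". That is a single combined parameter file, so load_combine_op expects the program's persistable variables to consume it exactly; when --rnn_layer_size / --use_gru build a different network than the one that produced the checkpoint, the data no longer lines up and you get the "partial data" error. Below is a minimal sketch of that loading path, assuming the Paddle 1.x fluid API; load_pretrained, the audio_feat placeholder, and the small fc layer are illustrative stand-ins, not the actual DeepSpeech2 graph.

    import paddle.fluid as fluid

    def load_pretrained(model_path, rnn_size):
        """Build an inference program and load a combined checkpoint into it.

        'params.pdparams' is one combined file, so the persistable variables
        of the program must match what the file contains; if the flags build
        a different network than the one that was trained, load_combine_op
        raises "You are not allowed to load partial data".
        """
        infer_program = fluid.Program()
        startup_program = fluid.Program()
        with fluid.program_guard(infer_program, startup_program):
            # Stand-in layers; the real graph is built from
            # --rnn_layer_size / --use_gru, which is why those flags must
            # match the aishell checkpoint (size 1024, GRU).
            feat = fluid.layers.data(name='audio_feat', shape=[161], dtype='float32')
            hidden = fluid.layers.fc(input=feat, size=rnn_size)

        exe = fluid.Executor(fluid.CPUPlace())
        exe.run(startup_program)

        # Same call as model_utils/model.py::init_from_pretrained_model.
        fluid.io.load_params(
            executor=exe,
            dirname=model_path,
            main_program=infer_program,
            filename="params.pdparams")
        return infer_program

    # e.g. load_pretrained('models/aishell/', rnn_size=1024)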
Warming up ...
('Warm-up Test Case %d: %s', 0, u'./dataset/aishell/data_aishell/wav/test/S0913/BAC009S0913W0464.wav')
/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py:779: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "deploy/demo_server.py", line 238, in
0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::LoadParamsFromBuffer(paddle::framework::ExecutionContext const&, paddle::platform::Place const&, std::istream*, bool, std::vector<std::string, std::allocator
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2488, in append_op
attrs=kwargs.get("attrs", None))
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 725, in load_vars
attrs={'file_path': os.path.join(load_dirname, filename)})
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 682, in load_vars
filename=filename)
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 798, in load_params
filename=filename)
File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
filename="params.pdparams")
File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
self.init_from_pretrained_model(exe, infer_program)
File "deploy/demo_server.py", line 195, in file_to_transcript
feeding_dict=data_generator.feeding)
File "deploy/demo_server.py", line 136, in warm_up_test
transcript = audio_process_handler(sample['audio_filepath'])
File "deploy/demo_server.py", line 219, in start_server
num_test_cases=3)
File "deploy/demo_server.py", line 234, in main
start_server()
File "deploy/demo_server.py", line 238, in
This time I made a point of adding --rnn_layer_size 1024 --use_gpu False. Version: paddlepaddle 1.6.2