PaddlePaddle / PaddleSpeech

Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
https://paddlespeech.readthedocs.io
Apache License 2.0

PaddleCheckError: not allowed to load partial data via load_combine_op #387

Closed lamudazh closed 3 years ago

lamudazh commented 5 years ago

I am using Docker to test a single audio file. I changed a little of the code in demo_server.py and got the error below. Is it due to the Paddle version? The installed version is paddlepaddle-gpu==1.6.1.post97.

```
PaddleCheckError: You are not allowed to load partial data via load_combine_op, use load_op instead. at [/paddle/paddle/fluid/operators/load_combine_op.h:105]
```

lfchener commented 5 years ago

> I am using Docker to test a single audio file. I changed a little of the code in demo_server.py and got the error below. Is it due to the Paddle version? The installed version is paddlepaddle-gpu==1.6.1.post97. PaddleCheckError: You are not allowed to load partial data via load_combine_op, use load_op instead. at [/paddle/paddle/fluid/operators/load_combine_op.h:105]

Can you show me your code?

gnaytx commented 4 years ago

I ran into the same problem:

```
python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 8086 \
    --cutoff_prob 0.99 \
    --use_gpu False \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --mean_std_path models/aishell/mean_std.npz
```

This is the command I used to start the server. How should I troubleshoot this?

Here is the error stack:

Python Call Stacks (More useful to users):

```
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2426, in append_op
    attrs=kwargs.get("attrs", None))
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 711, in load_vars
    attrs={'file_path': os.path.join(load_dirname, filename)})
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 668, in load_vars
    filename=filename)
File "/root/miniconda3/envs/paddle/lib/python2.7/site-packages/paddle/fluid/io.py", line 784, in load_params
    filename=filename)
File "/home/test_baidu_ai/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
File "/home/test_baidu_ai/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
File "deploy/demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
File "deploy/demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
File "deploy/demo_server.py", line 219, in start_server
    num_test_cases=3)
File "deploy/demo_server.py", line 234, in main
    start_server()
File "deploy/demo_server.py", line 238, in <module>
    main()
```

Error Message Summary:

```
PaddleCheckError: You are not allowed to load partial data via load_combine_op, use load_op instead. at [/paddle/paddle/fluid/operators/load_combine_op.h:105] [operator < load_combine > error]
```

lfchener commented 4 years ago

The default configuration of deploy/demo_server.py is for the LibriSpeech dataset, which is English. So if you want to use a Mandarin dataset such as aishell, you should make sure the following configurations are correct.

About the network structure: `--rnn_layer_size=1024 --use_gru=True`

About the warmup dataset: it must be a Mandarin manifest, e.g. `--warmup_manifest='data/aishell/manifest.test'`

Please try a configuration like the following:

```
python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 8086 \
    --cutoff_prob 0.99 \
    --use_gpu False \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --mean_std_path models/aishell/mean_std.npz \
    --rnn_layer_size=1024 \
    --use_gru=True \
    --warmup_manifest='data/aishell/manifest.test'
```

@gnaytx @lamudazh
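As an aside, the reason a network-structure mismatch surfaces as this particular error can be illustrated with a toy example (a hypothetical sketch, not Paddle's actual implementation): params.pdparams stores all parameters back-to-back in one combined file, so loading can only succeed if the program requests exactly the variables that were saved. If the defined network differs (e.g. wrong `rnn_layer_size` or `use_gru`), the requested sizes no longer line up with the file, or bytes are left over after all expected variables have been read, and the loader refuses to load "partial data".

```python
import struct

def save_combined(path, params):
    # Store every parameter back-to-back in a single "combined" file:
    # for each variable, a 4-byte element count, then raw float32 values.
    with open(path, "wb") as f:
        for name, values in params.items():
            f.write(struct.pack("<I", len(values)))
            f.write(struct.pack("<%df" % len(values), *values))

def load_combined(path, expected_sizes):
    # The loading program declares the element counts it expects;
    # each variable is read in order from the combined file.
    loaded = []
    with open(path, "rb") as f:
        for size in expected_sizes:
            (stored,) = struct.unpack("<I", f.read(4))
            if stored != size:
                raise RuntimeError(
                    "shape mismatch: expected %d elements, file stores %d"
                    % (size, stored))
            loaded.append(list(struct.unpack("<%df" % stored,
                                             f.read(4 * stored))))
        # If anything remains after all expected variables were read, the
        # program's parameter set does not match the saved model -- the
        # analogue of "not allowed to load partial data".
        if f.read(1):
            raise RuntimeError(
                "not allowed to load partial data: leftover bytes in file")
    return loaded
```

With a matching variable list the load succeeds; with a smaller one (as when the defined network is not the one that was trained) the leftover-bytes check fires, which is the same failure mode the thread is hitting.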

bigcash commented 4 years ago

@lfchener Hi, I ran into this problem too. I also tested the launch command in a CentOS 7 VM with a CPU-only environment, and it reported the same error (the CentOS 8 GPU environment on the server gives the same error as well). On CPU I ran:

```
(baidu27) [lingbao@centos702 DeepSpeech]$ python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 10010 \
    --warmup_manifest data/aishell/manifest.test \
    --mean_std_path models/aishell/mean_std.npz \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --cutoff_prob 0.99 \
    --rnn_layer_size 1024 \
    --use_gpu False
```

```
----------- Configuration Arguments -----------
alpha: 2.5
beam_size: 500
beta: 0.3
cutoff_prob: 0.99
cutoff_top_n: 40
decoding_method: ctc_beam_search
host_ip: 0.0.0.0
host_port: 10010
lang_model_path: models/lm/zh_giga.no_cna_cmn.prune01244.klm
mean_std_path: models/aishell/mean_std.npz
model_path: models/aishell/
num_conv_layers: 2
num_rnn_layers: 3
rnn_layer_size: 1024
share_rnn_weights: True
specgram_type: linear
speech_save_dir: demo_cache
use_gpu: 0
use_gru: False
vocab_path: models/aishell/vocab.txt
warmup_manifest: data/aishell/manifest.test
```

```
2020-01-03 11:47:07,213-INFO: begin to initialize the external scorer for decoding
2020-01-03 11:47:13,632-INFO: language model: is_character_based = 1, max_order = 5, dict_size = 0
2020-01-03 11:47:13,632-INFO: end initializing scorer
```

```
Warming up ...
('Warm-up Test Case %d: %s', 0, u'./dataset/aishell/data_aishell/wav/test/S0913/BAC009S0913W0464.wav')
/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py:779: UserWarning: The following exception is not an EOF exception.
  "The following exception is not an EOF exception.")
Traceback (most recent call last):
  File "deploy/demo_server.py", line 238, in <module>
    main()
  File "deploy/demo_server.py", line 234, in main
    start_server()
  File "deploy/demo_server.py", line 219, in start_server
    num_test_cases=3)
  File "deploy/demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
  File "deploy/demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
  File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 798, in load_params
    filename=filename)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 682, in load_vars
    filename=filename)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 726, in load_vars
    executor.run(load_prog)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 780, in run
    six.reraise(*sys.exc_info())
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 775, in run
    use_program_cache=use_program_cache)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 822, in _run_impl
    use_program_cache=use_program_cache)
  File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/executor.py", line 899, in _run_program
    fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:
```


C++ Call Stacks (More useful to developers):

```
0   std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::LoadParamsFromBuffer(paddle::framework::ExecutionContext const&, paddle::platform::Place const&, std::istream*, bool, std::vector<std::string, std::allocator<std::string> > const&) const
3   paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
4   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CPUPlace, false, 0ul, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, float>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, double>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, int>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, signed char>, paddle::operators::LoadCombineOpKernel<paddle::platform::CPUDeviceContext, long> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
6   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
7   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
8   paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
9   paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool)
```


Python Call Stacks (More useful to users):

```
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/framework.py", line 2488, in append_op
    attrs=kwargs.get("attrs", None))
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 725, in load_vars
    attrs={'file_path': os.path.join(load_dirname, filename)})
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 682, in load_vars
    filename=filename)
File "/home/lingbao/anaconda3/envs/baidu27/lib/python2.7/site-packages/paddle/fluid/io.py", line 798, in load_params
    filename=filename)
File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 161, in init_from_pretrained_model
    filename="params.pdparams")
File "/home/lingbao/work/DeepSpeech/deploy/../model_utils/model.py", line 412, in infer_batch_probs
    self.init_from_pretrained_model(exe, infer_program)
File "deploy/demo_server.py", line 195, in file_to_transcript
    feeding_dict=data_generator.feeding)
File "deploy/demo_server.py", line 136, in warm_up_test
    transcript = audio_process_handler(sample['audio_filepath'])
File "deploy/demo_server.py", line 219, in start_server
    num_test_cases=3)
File "deploy/demo_server.py", line 234, in main
    start_server()
File "deploy/demo_server.py", line 238, in <module>
    main()
```


Error Message Summary:

```
Error: You are not allowed to load partial data via load_combine_op, use load_op instead. at (/paddle/paddle/fluid/operators/load_combine_op.h:105) [operator < load_combine > error]
```

This time I made a point of adding `--rnn_layer_size 1024 --use_gpu False`. Paddle version: paddlepaddle 1.6.2.
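One detail worth noting: the configuration dump above still shows `use_gru: False`, while the earlier reply says the aishell model needs both `--rnn_layer_size=1024` and `--use_gru=True`. A command along these lines might therefore be worth trying (untested here, paths assumed to match the dump above):

```shell
python deploy/demo_server.py \
    --host_ip 0.0.0.0 \
    --host_port 10010 \
    --warmup_manifest data/aishell/manifest.test \
    --mean_std_path models/aishell/mean_std.npz \
    --vocab_path models/aishell/vocab.txt \
    --model_path models/aishell/ \
    --lang_model_path models/lm/zh_giga.no_cna_cmn.prune01244.klm \
    --cutoff_prob 0.99 \
    --rnn_layer_size 1024 \
    --use_gru True \
    --use_gpu False
```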