PaddlePaddle / models

Officially maintained and supported by PaddlePaddle, covering CV, NLP, Speech, recommendation, time series, large models, and more.
Apache License 2.0

Deploying the OCR recognition model fails with "Error: The width of each timestep in Input(Label) should be 1." #4312

Closed endy-see closed 4 years ago

endy-see commented 4 years ago

The full error message is as follows:

--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
I0218 13:31:01.658505 36056 graph_pattern_detector.cc:96] --- detected 8 subgraphs
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
I0218 13:31:01.665282 36056 graph_pattern_detector.cc:96] --- detected 2 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
I0218 13:31:01.667009 36056 graph_pattern_detector.cc:96] --- detected 8 subgraphs
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0218 13:31:01.671290 36056 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0218 13:31:01.678493 36056 memory_optimize_pass.cc:223] Cluster name : gru_0.tmp_0  size: 800
I0218 13:31:01.678519 36056 memory_optimize_pass.cc:223] Cluster name : gru_0.tmp_2  size: 800
I0218 13:31:01.678534 36056 memory_optimize_pass.cc:223] Cluster name : batch_norm_3.tmp_3  size: 3072
I0218 13:31:01.678550 36056 memory_optimize_pass.cc:223] Cluster name : batch_norm_1.tmp_3  size: 3072
I0218 13:31:01.678565 36056 memory_optimize_pass.cc:223] Cluster name : pixel  size: 192
I0218 13:31:01.678618 36056 memory_optimize_pass.cc:223] Cluster name : fc_0.tmp_1  size: 2400
I0218 13:31:01.678634 36056 memory_optimize_pass.cc:223] Cluster name : gru_0.tmp_3  size: 800
--- Running analysis [ir_graph_to_program_pass]
I0218 13:31:01.685699 36056 analysis_predictor.cc:470] ======= optimize end =======
I0218 13:31:01.685760 36056 naive_executor.cc:105] --- skip [feed], feed -> pixel
I0218 13:31:01.686285 36056 naive_executor.cc:105] --- skip [gru_0.tmp_2], fetch -> fetch
I0218 13:31:01.686311 36056 naive_executor.cc:105] --- skip [fc_0.tmp_1], fetch -> fetch
I0218 13:31:01.686329 36056 naive_executor.cc:105] --- skip [batch_norm_1.tmp_3], fetch -> fetch
W0218 13:31:01.996632 36056 device_context.cc:236] Please NOTE: device: 1, CUDA Capability: 61, Driver API Version: 10.0, Runtime API Version: 9.0
W0218 13:31:02.002913 36056 device_context.cc:244] device: 1, cuDNN Version: 7.6.
terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
  what():


C++ Call Stacks (More useful to developers):

0   std::__cxx11::basic_string<char, std::char_traits, std::allocator > paddle::platform::GetTraceBackString<std::__cxx11::basic_string<char, std::char_traits, std::allocator > >(std::__cxx11::basic_string<char, std::char_traits, std::allocator >&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, char const*, int)
2   paddle::operators::WarpCTCKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
3   std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::WarpCTCKernel<paddle::platform::CUDADeviceContext, float> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&)#1}>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
4   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
5   paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
6   paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
7   paddle::framework::NaiveExecutor::Run()
8   paddle::AnalysisPredictor::ZeroCopyRun()


Python Call Stacks (More useful to users):

File "/root/miniconda3/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2459, in append_op
  attrs=kwargs.get("attrs", None))
File "/root/miniconda3/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
  return self.main_program.current_block().append_op(*args, **kwargs)
File "/root/miniconda3/lib/python3.6/site-packages/paddle/fluid/layers/nn.py", line 7526, in warpctc
  'norm_by_times': norm_by_times,
File "/data/xxx/projects/models-1.6/models/PaddleCV/ocr_recognition/crnn_ctc_model.py", line 202, in ctc_train_net
  input=fc_out, label=label, blank=num_classes, norm_by_times=True)
File "train.py", line 84, in train
  args, data_shape, num_classes)
File "train.py", line 254, in main
  train(args)
File "train.py", line 258, in <module>
  main()


Error Message Summary:

Error: The width of each timestep in Input(Label) should be 1.
[Hint: Expected label_dims[0] == label->numel(), but received label_dims[0]:48 != label->numel():9600.]
at (/home/install/Paddle/paddle/fluid/operators/warpctc_op.h:170)
[operator < warpctc > error]
run_impl.sh: line 28: 36056 Aborted (core dumped) ./${DEMO_NAME}

wanghaoshuang commented 4 years ago
  1. When a model is deployed for prediction, it should not have an Input(label). When calling the save_inference_model API, remove label from feeded_var_names and set target_vars to the output of the fc layer, not to the loss or accuracy.

  2. If you do need label, its shape must be [N, 1], where N is the total number of label ids. See here for how to set the correct label shape: https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/ocr_recognition/utility.py#L88
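The [N, 1] layout in point 2 can be illustrated without Paddle. Below is a minimal sketch with hypothetical label sequences: all label ids in the batch are concatenated into a single column, so the first dimension equals the total id count and matches label->numel(), which is exactly the check that failed above (48 vs 9600).

```python
import numpy as np

# Hypothetical variable-length label sequences for a batch of 3 images.
labels = [[3, 7, 1], [5, 2], [9, 9, 4, 0]]

# warpctc expects every label id in one column: shape [N, 1], where N is
# the total number of label ids across the batch (here 3 + 2 + 4 = 9).
flat = np.concatenate([np.asarray(seq) for seq in labels]).reshape(-1, 1)

print(flat.shape)  # (9, 1) -> label_dims[0] == numel, so the check passes
```

A tensor with a second dimension wider than 1 (e.g. a padded [48, 200] batch, numel 9600) is what triggers the "width of each timestep" error.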

endy-see commented 4 years ago
  1. When a model is deployed for prediction, it should not have an Input(label). When calling the save_inference_model API, remove label from feeded_var_names and set target_vars to the output of the fc layer, not to the loss or accuracy.
  2. If you do need label, its shape must be [N, 1], where N is the total number of label ids. See here for how to set the correct label shape: https://github.com/PaddlePaddle/models/blob/develop/PaddleCV/ocr_recognition/utility.py#L88

The cause of the error was that I used the inference model saved during training for C++ inference, so the network loaded at prediction time was still the training network, which the C++ predictor cannot run. The fix: in Python, rebuild the graph with the inference network and save the training-time model again before deploying it to C++.
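The re-export step described above could look roughly like the sketch below, using the Paddle 1.6-era fluid API. This is a sketch under assumptions, not a verified script: the `ctc_infer` import, its signature, the input shape, `num_classes`, and the directory names are all guesses based on this issue's context.

```python
import paddle.fluid as fluid
from crnn_ctc_model import ctc_infer  # assumed inference-only net from this repo

exe = fluid.Executor(fluid.CPUPlace())

# Rebuild the graph with the inference network only (no label, no CTC loss).
main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    # 'pixel' matches the feed var name seen in the log above;
    # the shape and num_classes here are placeholders.
    images = fluid.layers.data(name='pixel', shape=[1, 48, 512], dtype='float32')
    ids = ctc_infer(images, num_classes=95, use_cudnn=False)

exe.run(startup_prog)

# Load the parameters saved during training into the inference graph.
fluid.io.load_persistables(exe, dirname='trained_params', main_program=main_prog)

# Save an inference model the C++ predictor can run: only the image is fed,
# only the decoded ids are fetched.
fluid.io.save_inference_model(
    dirname='deploy_model',
    feeded_var_names=['pixel'],
    target_vars=[ids],
    executor=exe,
    main_program=main_prog)
```

Because the saved program now contains no Input(label) and no warpctc op, the C++ side no longer hits the label-width check.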