PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle (『飞桨』): high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

Error during C++ inference with paddle-infer: The conv2d Op's Input Variable `Input` contains uninitialized Tensor #44814

Closed taoge2222 closed 1 year ago

taoge2222 commented 2 years ago

Describe the Bug

Platform: local machine with i7-7700 + 3090, paddlepaddle-2.31, Ubuntu 16.04, CUDA 11.2, Python 3.6, g++ 5.5

Bug description: I tested two models, ppyolo_mbv3_large_coco and picodet_l_320_coco_lcnet.

1. All of the [w/ post-processing] PicoDet models linked at https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/picodet fail during C++ inference with paddle-infer. The example program used is the official https://github.com/PaddlePaddle/Paddle-Inference-Demo/c++/cpu/resnet50/resnet50_test.cc, in which

  std::vector<int> input_shape = {FLAGS_batch_size, 3, 224, 224};
  std::vector<float> input_data(FLAGS_batch_size * 3 * 224 * 224);

were changed so that the input shape is 3, 320, 320. Nothing else was modified. After compiling I ran

./resnet50_320 --use_ort true --model_file picodet_l_320_coco_lcnet/model.pdmodel --params_file picodet_l_320_coco_lcnet/model.pdiparams

and the error output is as follows:

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0802 17:04:37.169759 5585 analysis_predictor.cc:2129] Paddle2ONNX do't support convert the Model, fall back to using Paddle Inference.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
--- Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- fused 0 pairs of fc gru patterns
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [matmul_v2_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
W0802 17:04:37.280578 5585 gpu_cpu_map_matmul_to_mul_pass.cc:425] matmul op not support broadcast, please check inputs'shape.
W0802 17:04:37.280616 5585 gpu_cpu_map_matmul_to_mul_pass.cc:425] matmul op not support broadcast, please check inputs'shape.
W0802 17:04:37.280663 5585 gpu_cpu_map_matmul_to_mul_pass.cc:425] matmul op not support broadcast, please check inputs'shape.
W0802 17:04:37.280702 5585 gpu_cpu_map_matmul_to_mul_pass.cc:425] matmul op not support broadcast, please check inputs'shape.
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0802 17:04:37.379978 5585 fuse_pass_base.cc:59] --- detected 57 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0802 17:04:37.441790 5585 memory_optimize_pass.cc:218] Cluster name : scale_factor size: 8
I0802 17:04:37.441803 5585 memory_optimize_pass.cc:218] Cluster name : hardswish_59.tmp_0 size: 819200
I0802 17:04:37.441807 5585 memory_optimize_pass.cc:218] Cluster name : reshape2_5.tmp_0 size: 6400
I0802 17:04:37.441809 5585 memory_optimize_pass.cc:218] Cluster name : hardswish_45.tmp_0 size: 6553600
I0802 17:04:37.441813 5585 memory_optimize_pass.cc:218] Cluster name : image size: 1228800
I0802 17:04:37.441815 5585 memory_optimize_pass.cc:218] Cluster name : reshape2_8.tmp_0 size: 1600
I0802 17:04:37.441818 5585 memory_optimize_pass.cc:218] Cluster name : hardswish_69.tmp_0 size: 409600
I0802 17:04:37.441821 5585 memory_optimize_pass.cc:218] Cluster name : hardswish_78.tmp_0 size: 2048000
I0802 17:04:37.441824 5585 memory_optimize_pass.cc:218] Cluster name : reshape2_2.tmp_0 size: 25600
I0802 17:04:37.441828 5585 memory_optimize_pass.cc:218] Cluster name : batch_norm_2.tmp_2 size: 6553600
I0802 17:04:37.441830 5585 memory_optimize_pass.cc:218] Cluster name : conv2d_156.tmp_0 size: 204800
I0802 17:04:37.441833 5585 memory_optimize_pass.cc:218] Cluster name : hardswish_92.tmp_0 size: 64000
I0802 17:04:37.441835 5585 memory_optimize_pass.cc:218] Cluster name : tmp_0 size: 16000
--- Running analysis [ir_graph_to_program_pass]
I0802 17:04:37.556859 5585 analysis_predictor.cc:1265] ======= optimize end =======
I0802 17:04:37.566126 5585 naive_executor.cc:110] --- skip [feed], feed -> scale_factor
I0802 17:04:37.566146 5585 naive_executor.cc:110] --- skip [feed], feed -> image
I0802 17:04:37.571050 5585 naive_executor.cc:110] --- skip [multiclass_nms3_0.tmp_0], fetch -> fetch
I0802 17:04:37.571064 5585 naive_executor.cc:110] --- skip [multiclass_nms3_0.tmp_2], fetch -> fetch
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
  what():

Compile Traceback (most recent call last):
File "tools/export_model.py", line 111, in <module>
  main()
File "tools/export_model.py", line 107, in main
  run(FLAGS, cfg)
File "tools/export_model.py", line 73, in run
  trainer.export(FLAGS.output_dir)
File "/paddle/hk/PaddleDetection/ppdet/engine/trainer.py", line 718, in export
  static_model, pruned_input_spec = self._get_infer_cfg_and_input_spec(
File "/paddle/hk/PaddleDetection/ppdet/engine/trainer.py", line 696, in _get_infer_cfg_and_input_spec
  input_spec, static_model.forward.main_program,
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 542, in main_program
  concrete_program = self.concrete_program
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 458, in concrete_program
  return self.concrete_program_specify_input_spec(input_spec=None)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 495, in concrete_program_specify_input_spec
  concrete_program, _ = self.get_concrete_program(
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 406, in get_concrete_program
  concrete_program, partial_program_layer = self._program_cache[cache_key]
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 723, in __getitem__
  self._caches[item] = self._build_once(item)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 709, in _build_once
  concrete_program = ConcreteProgram.from_func_spec(
File "", line 2, in from_func_spec

File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
  return wrapped_func(*args, **kwargs)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/base.py", line 40, in __impl__
  return func(*args, **kwargs)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 662, in from_func_spec
  outputs = static_func(*inputs)
File "/tmp/tmpue2kxzsr.py", line 99, in forward
  out = paddle.jit.dy2static.convert_ifelse(self.training, true_fn_5,
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 271, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/tmp/tmpue2kxzsr.py", line 82, in false_fn_5
  ] = paddle.jit.dy2static.convert_while_loop(for_loop_condition_0,
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 45, in convert_while_loop
  loop_vars = _run_py_while(cond, body, loop_vars)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 59, in _run_py_while
  loop_vars = body(*loop_vars)
File "/paddle/hk/PaddleDetection/ppdet/modeling/architectures/meta_arch.py", line 75, in forward
  outs.append(self.get_pred())
File "/tmp/tmpxovd4249.py", line 42, in get_pred
  __return_value_0 = paddle.jit.dy2static.convert_ifelse(paddle.jit.
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 271, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/tmp/tmpxovd4249.py", line 38, in false_fn_7
  __return_value_0, output = paddle.jit.dy2static.convert_ifelse(self
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 271, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/paddle/hk/PaddleDetection/ppdet/modeling/architectures/picodet.py", line 89, in get_pred
  bbox_pred, bbox_num = self._forward()
File "/tmp/tmpxe8c720h.py", line 34, in _forward
  __return_value_1 = paddle.jit.dy2static.convert_ifelse(paddle.jit.
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 271, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/paddle/hk/PaddleDetection/ppdet/modeling/architectures/picodet.py", line 71, in _forward
  bboxes, bbox_num = self.head.post_process(
File "/tmp/tmpty_j77vl.py", line 32, in post_process
  __return_value_4, pred_bboxes, scale_factor = (paddle.jit.dy2static.
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 271, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/paddle/hk/PaddleDetection/ppdet/modeling/heads/pico_head.py", line 775, in post_process
  scale_y, scale_x = paddle.split(scale_factor, 2, axis=-1)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/tensor/manipulation.py", line 849, in split
  return paddle.fluid.layers.split(
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/layers/nn.py", line 5028, in split
  helper.append_op(
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op
  return self.main_program.current_block().append_op(*args, **kwargs)
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 3178, in append_op
  op = Operator(
File "/paddle/anaconda3/lib/python3.8/site-packages/paddle/fluid/framework.py", line 2224, in __init__
  for frame in traceback.extract_stack():

C++ Traceback (most recent call last):


Error Message Summary:

InvalidArgumentError: The split Op's Input Variable X contains uninitialized Tensor. [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /media/wht-gpu/H/linux/Paddle/paddle/fluid/framework/operator.cc:2382) [operator < split > error] Aborted
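For reference, a minimal, unverified sketch of how the resnet50 demo could be adapted so that both feeds listed in the log above (image and scale_factor) are filled before Run(); the traceback shows the split op operating on scale_factor, so leaving that feed empty is presumably what produces the uninitialized Tensor. The dummy pixel values and the {1, 1} scale factor below are placeholders, not the real preprocessing.

// Sketch only (not a verified fix): feed every input the exported model declares.
#include <functional>
#include <numeric>
#include <vector>
#include "paddle_inference_api.h"

int main() {
  paddle_infer::Config config;
  config.SetModel("picodet_l_320_coco_lcnet/model.pdmodel",
                  "picodet_l_320_coco_lcnet/model.pdiparams");
  auto predictor = paddle_infer::CreatePredictor(config);

  // "image": NCHW 1x3x320x320; real code would copy the preprocessed picture here.
  std::vector<float> image(1 * 3 * 320 * 320, 0.5f);
  auto image_t = predictor->GetInputHandle("image");
  image_t->Reshape({1, 3, 320, 320});
  image_t->CopyFromCpu(image.data());

  // "scale_factor": {scale_y, scale_x} used by the post-processing head;
  // placeholder values, the real ones come from the resize step.
  std::vector<float> scale_factor = {1.0f, 1.0f};
  auto sf_t = predictor->GetInputHandle("scale_factor");
  sf_t->Reshape({1, 2});
  sf_t->CopyFromCpu(scale_factor.data());

  predictor->Run();

  // Fetch the first output (multiclass_nms3_0.tmp_0 in the log above).
  auto out_t = predictor->GetOutputHandle(predictor->GetOutputNames()[0]);
  auto out_shape = out_t->shape();
  int out_num = std::accumulate(out_shape.begin(), out_shape.end(), 1,
                                std::multiplies<int>());
  std::vector<float> out_data(out_num);
  out_t->CopyToCpu(out_data.data());
  return 0;
}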

However, when predicting with python deploy/python/infer.py, the model exported with post-processing but without NMS detects no objects, while models exported with every other combination predict normally.

2. For the ppyolo_mbv3_large_coco model, the exported model cannot run at all, regardless of whether it is exported with post-processing or with NMS. The error message is:

./resnet50_320 --use_ort true --model_file ppyolo_mbv3_large_coco/model.pdmodel --params_file ppyolo_mbv3_large_coco_wo/model.pdiparams

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0802 17:05:54.514226 5602 analysis_predictor.cc:2129] Paddle2ONNX do't support convert the Model, fall back to using Paddle Inference.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
--- Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
--- fused 0 pairs of fc gru patterns
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [matmul_v2_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0802 17:05:54.647419 5602 fuse_pass_base.cc:59] --- detected 37 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0802 17:05:54.686399 5602 memory_optimize_pass.cc:218] Cluster name : fill_constant_37.tmp_0 size: 4
I0802 17:05:54.686411 5602 memory_optimize_pass.cc:218] Cluster name : batch_norm_7.tmp_2 size: 1843200
I0802 17:05:54.686414 5602 memory_optimize_pass.cc:218] Cluster name : conv2d_73.tmp_0 size: 6553600
I0802 17:05:54.686416 5602 memory_optimize_pass.cc:218] Cluster name : elementwise_add_5 size: 128000
I0802 17:05:54.686419 5602 memory_optimize_pass.cc:218] Cluster name : scale_factor size: 8
I0802 17:05:54.686420 5602 memory_optimize_pass.cc:218] Cluster name : batch_norm_4.tmp_2 size: 6553600
I0802 17:05:54.686424 5602 memory_optimize_pass.cc:218] Cluster name : fill_constant_33.tmp_0 size: 8
I0802 17:05:54.686425 5602 memory_optimize_pass.cc:218] Cluster name : batch_norm_3.tmp_2 size: 1638400
I0802 17:05:54.686427 5602 memory_optimize_pass.cc:218] Cluster name : pool2d_10.tmp_0 size: 64000
I0802 17:05:54.686429 5602 memory_optimize_pass.cc:218] Cluster name : im_shape size: 8
I0802 17:05:54.686431 5602 memory_optimize_pass.cc:218] Cluster name : image size: 1228800
I0802 17:05:54.686434 5602 memory_optimize_pass.cc:218] Cluster name : shape_4.tmp_0_slice_0 size: 4
--- Running analysis [ir_graph_to_program_pass]
I0802 17:05:54.777642 5602 analysis_predictor.cc:1265] ======= optimize end =======
I0802 17:05:54.782935 5602 naive_executor.cc:110] --- skip [feed], feed -> scale_factor
I0802 17:05:54.782950 5602 naive_executor.cc:110] --- skip [feed], feed -> image
I0802 17:05:54.782967 5602 naive_executor.cc:110] --- skip [feed], feed -> im_shape
I0802 17:05:54.786860 5602 naive_executor.cc:110] --- skip [multiclass_nms3_0.tmp_0], fetch -> fetch
I0802 17:05:54.786867 5602 naive_executor.cc:110] --- skip [multiclass_nms3_0.tmp_2], fetch -> fetch
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
  what():

Compile Traceback (most recent call last):
File "tools/export_model.py", line 111, in <module>
  main()
File "tools/export_model.py", line 107, in main
  run(FLAGS, cfg)
File "tools/export_model.py", line 73, in run
  trainer.export(FLAGS.output_dir)
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/engine/trainer.py", line 775, in export
  save_dir)
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/engine/trainer.py", line 752, in _get_infer_cfg_and_input_spec
  input_spec, static_model.forward.main_program,
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 580, in main_program
  concrete_program = self.concrete_program
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 488, in concrete_program
  return self.concrete_program_specify_input_spec(input_spec=None)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 528, in concrete_program_specify_input_spec
  *desired_input_spec, with_hook=with_hook)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 436, in get_concrete_program
  concrete_program, partial_program_layer = self._program_cache[cache_key]
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 801, in __getitem__
  self._caches[item_id] = self._build_once(item)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 790, in _build_once
  **cache_key.kwargs)
File "", line 2, in from_func_spec

File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
  return wrapped_func(*args, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/base.py", line 51, in __impl__
  return func(*args, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/program_translator.py", line 733, in from_func_spec
  outputs = static_func(*inputs)
File "/tmp/tmpjbf_sta8.py", line 101, in forward
  false_fn_5, (), (inputs, self), (out,))
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 211, in convert_ifelse
  out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 257, in _run_py_ifelse
  return true_fn(*true_args) if pred else false_fn(*false_args)
File "/tmp/tmpjbf_sta8.py", line 84, in false_fn_5
  for_loop_body_0, [inputs_list, __for_loop_var_index_0])
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 45, in convert_while_loop
  loop_vars = _run_py_while(cond, body, loop_vars)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/dygraph_to_static/convert_operators.py", line 59, in _run_py_while
  loop_vars = body(*loop_vars)
File "/tmp/tmpjbf_sta8.py", line 79, in for_loop_body_0
  dy2static.convert_call(self.get_pred)())
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/modeling/architectures/yolo.py", line 128, in get_pred
  return self._forward()
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/modeling/architectures/yolo.py", line 79, in _forward
  body_feats = self.backbone(self.inputs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
  return self._dygraph_call_func(*inputs, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
  outputs = self.forward(*inputs, **kwargs)
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/modeling/backbones/mobilenet_v3.py", line 455, in forward
  x = self.conv1(inputs['image'])
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
  return self._dygraph_call_func(*inputs, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
  outputs = self.forward(*inputs, **kwargs)
File "/media/wht-gpu/F/python_program/PaddlePaddle_models/PaddleCV/PaddleDetection_2.4/ppdet/modeling/backbones/mobilenet_v3.py", line 90, in forward
  x = self.conv(x)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
  return self._dygraph_call_func(*inputs, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
  outputs = self.forward(*inputs, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/nn/layer/conv.py", line 678, in forward
  use_cudnn=self._use_cudnn)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/nn/functional/conv.py", line 169, in _conv_nd
  type=op_type, inputs=inputs, outputs=outputs, attrs=attrs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/layer_helper.py", line 44, in append_op
  return self.main_program.current_block().append_op(*args, **kwargs)
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/framework.py", line 3621, in append_op
  attrs=kwargs.get("attrs", None))
File "/home/wht-gpu/.virtualenvs/paddle2/lib/python3.6/site-packages/paddle/fluid/framework.py", line 2635, in __init__
  for frame in traceback.extract_stack():

C++ Traceback (most recent call last):


Error Message Summary:

InvalidArgumentError: The conv2d Op's Input Variable Input contains uninitialized Tensor. [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /media/wht-gpu/H/linux/Paddle/paddle/fluid/framework/operator.cc:2382) [operator < conv2d > error] Aborted
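Judging from the log above, this export declares three feeds (image, im_shape and scale_factor), so the single-input resnet50 demo necessarily leaves some of them unfilled. A hedged sketch of a generic helper that fills whatever feeds the program declares, with placeholder values, might look like the following (FeedAllInputs is a made-up name for illustration, not a Paddle API):

// Sketch with placeholder values; adjust preprocessing to the real pipeline.
#include <string>
#include <vector>
#include "paddle_inference_api.h"

void FeedAllInputs(paddle_infer::Predictor* predictor, int h, int w) {
  for (const auto& name : predictor->GetInputNames()) {
    auto t = predictor->GetInputHandle(name);
    if (name == "image") {
      std::vector<float> img(1 * 3 * h * w, 0.f);  // preprocessed pixels go here
      t->Reshape({1, 3, h, w});
      t->CopyFromCpu(img.data());
    } else if (name == "im_shape") {
      std::vector<float> im_shape = {static_cast<float>(h), static_cast<float>(w)};
      t->Reshape({1, 2});
      t->CopyFromCpu(im_shape.data());
    } else if (name == "scale_factor") {
      std::vector<float> scale = {1.f, 1.f};  // {scale_y, scale_x}, placeholder
      t->Reshape({1, 2});
      t->CopyFromCpu(scale.data());
    }
  }
}

Called as FeedAllInputs(predictor.get(), 320, 320) before predictor->Run(), this avoids hard-coding which inputs a particular export happens to declare.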


However, if I run inference with python deploy/python/infer.py, all of them work.

I may well have missed something, but I have gone through all past issues and could not find a solution. I hope this gets attention; in particular, I hope the team fixes the "contains uninitialized Tensor" bug.

Additional Supplementary Information

No response

paddle-bot[bot] commented 2 years ago

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please check again that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also check out the API documentation, the FAQ, historical Github Issues, and the AI community to get an answer. Have a nice day!

taoge2222 commented 2 years ago

The PicoDet [w/o post-processing] models still work, so that part is not a big problem. What is frustrating is that none of the exported ppyolo_mbv3_large_coco models can run.

heavengate commented 2 years ago

The PaddleDetection repo provides C++ inference code at https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/deploy/cpp; you can use it as a reference.

taoge2222 commented 2 years ago

Reading through the entire Paddle Inference C++ source code would be far too hard for me; that is asking a lot.

jerrywgz commented 2 years ago

The link above is PaddleDetection's deployment example code based on C++ inference. It is not a large amount of work; you can take a look at it for reference.

paddle-bot[bot] commented 1 year ago

Since you haven't replied for more than a year, we have closed this issue/PR. If the problem is not solved or there is a follow-up, please reopen it at any time and we will continue to follow up.

JianyuZhan commented 3 months ago

I ran into this problem as well: using PaddleDetection, I trained a layout detection model and exported it following this document: https://github.com/PaddlePaddle/PaddleOCR/blob/main/ppstructure/layout/README_ch.md#72-%E6%A8%A1%E5%9E%8B%E6%8E%A8%E7%90%86

Running inference with the PaddleDetection/deploy/python/infer.py script works fine;

but if I use the PP-OCR C++ inference build (compiled following this document: https://github.com/PaddlePaddle/PaddleOCR/blob/main/deploy/cpp_infer/readme_ch.md), it reports this error:

InvalidArgumentError: The split Op's Input Variable X contains uninitialized Tensor. [Hint: Expected t->IsInitialized() == true, but received t->IsInitialized():0 != true:1.] (at /paddle/paddle/fluid/framework/operator.cc:2094) [operator < split > error] Aborted (core dumped)

However, if ppocr uses its own official layout detection model, picodet_lcnet_x1_0_fgd_layout_cdla_infer, it works again.

A comparison of my exported model with the official one:

Official model: picodet_lcnet_x1_0_fgd_layout_cdla_infer/
-rw-rw-r-- 1 ubuntu ubuntu 7293079 Jul 29 06:33 inference.pdiparams
-rw-rw-r-- 1 ubuntu ubuntu 44935 Jul 29 06:33 inference.pdiparams.info
-rw-rw-r-- 1 ubuntu ubuntu 2763579 Jul 29 06:33 inference.pdmodel

My exported model: picodet_lcnet_x1_0_layout/
total 7520
-rw-rw-r-- 1 ubuntu docker 543 Jul 25 03:38 infer_cfg.yml
-rw-rw-r-- 1 ubuntu docker 7404129 Jul 25 03:39 model.pdiparams
-rw-rw-r-- 1 ubuntu docker 44951 Jul 25 03:39 model.pdiparams.info
-rw-rw-r-- 1 ubuntu docker 242176 Jul 25 03:39 model.pdmodel
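One quick way to see whether the two exports differ in what they expect from the C++ pipeline is to print the inputs and outputs each program declares. Below is only a diagnostic sketch: the paths are the two directories listed above, and DumpIONames is a made-up helper, not part of PaddleOCR or Paddle Inference.

// Diagnostic sketch: list declared inputs/outputs of both exported programs.
#include <iostream>
#include <string>
#include "paddle_inference_api.h"

void DumpIONames(const std::string& model, const std::string& params) {
  paddle_infer::Config config;
  config.SetModel(model, params);
  auto predictor = paddle_infer::CreatePredictor(config);
  std::cout << model << "\n  inputs:";
  for (const auto& n : predictor->GetInputNames()) std::cout << " " << n;
  std::cout << "\n  outputs:";
  for (const auto& n : predictor->GetOutputNames()) std::cout << " " << n;
  std::cout << std::endl;
}

int main() {
  DumpIONames("picodet_lcnet_x1_0_fgd_layout_cdla_infer/inference.pdmodel",
              "picodet_lcnet_x1_0_fgd_layout_cdla_infer/inference.pdiparams");
  DumpIONames("picodet_lcnet_x1_0_layout/model.pdmodel",
              "picodet_lcnet_x1_0_layout/model.pdiparams");
  return 0;
}

If the custom export declares a feed (for example scale_factor) that the official model does not, and the C++ pipeline never fills it, that would be consistent with the uninitialized-Tensor error in the split op.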