PaddlePaddle / Paddle3D

A 3D computer vision development toolkit based on PaddlePaddle. It supports point-cloud object detection, segmentation, and monocular 3D object detection models.
Apache License 2.0

smoke C++ inference error #372

Open hunagjingwei opened 1 year ago

hunagjingwei commented 1 year ago

```
./build/infer --model_file /home/hjw/projects/Paddle3D-develop/deploy/smoke/cpp/exported_model/smoke.pdmodel --params_file /home/hjw/projects/Paddle3D-develop/deploy/smoke/cpp/exported_model/smoke.pdiparams --image /home/hjw/projects/mmdetection3d/demo/data/kitti/000008.png
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0616 17:04:52.870446  1718 analysis_predictor.cc:964] MKLDNN is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [mkldnn_placement_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
---    Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
---    fused 0 pairs of fc gru patterns
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [matmul_v2_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
I0616 17:04:52.944406  1718 fuse_pass_base.cc:57] ---  detected 1 subgraphs
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running IR pass [depthwise_conv_mkldnn_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [conv_transpose_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_bias_mkldnn_fuse_pass]
I0616 17:04:52.973549  1718 fuse_pass_base.cc:57] ---  detected 20 subgraphs
--- Running IR pass [conv_transpose_bias_mkldnn_fuse_pass]
--- Running IR pass [conv_elementwise_add_mkldnn_fuse_pass]
---    Fused 0 projection conv (as y) + elementwise_add patterns
---    Fused 0 conv (as x) + elementwise_add patterns
---    Fused 0 conv (as y) + elementwise_add patterns
--- Running IR pass [conv_concat_relu_mkldnn_fuse_pass]
--- Running IR pass [conv_relu_mkldnn_fuse_pass]
--- Running IR pass [conv_leaky_relu_mkldnn_fuse_pass]
--- Running IR pass [conv_relu6_mkldnn_fuse_pass]
--- Running IR pass [conv_swish_mkldnn_fuse_pass]
--- Running IR pass [conv_hard_swish_mkldnn_fuse_pass]
--- Running IR pass [conv_mish_mkldnn_fuse_pass]
--- Running IR pass [conv_hard_sigmoid_mkldnn_fuse_pass]
--- Running IR pass [conv_gelu_mkldnn_fuse_pass]
--- Running IR pass [scale_matmul_fuse_pass]
---    fused 0 scale with matmul
--- Running IR pass [reshape_transpose_matmul_mkldnn_fuse_pass]
---    Fused 0 ReshapeTransposeMatmul patterns for matmul Op
---    Fused 0 ReshapeTransposeMatmul patterns for matmul Op with transpose's xshape
---    Fused 0 ReshapeTransposeMatmul patterns for matmul Op with reshape's xshape
---    Fused 0 ReshapeTransposeMatmul patterns for matmul Op with reshape's xshape with transpose's xshape
--- Running IR pass [reshape_transpose_matmul_v2_mkldnn_fuse_pass]
---    Fused 0 ReshapeTransposeMatmul patterns for matmul_v2 Op
---    Fused 0 ReshapeTransposeMatmul patterns for matmul_v2 Op with transpose's xshape
---    Fused 0 ReshapeTransposeMatmul patterns for matmul_v2 Op with reshape's xshape
---    Fused 0 ReshapeTransposeMatmul patterns for matmul_v2 Op with reshape's xshape with transpose's xshape
--- Running IR pass [matmul_transpose_reshape_fuse_pass]
---    Fused 0 MatmulTransposeReshape patterns for matmul Op
--- Running IR pass [matmul_v2_transpose_reshape_fuse_pass]
---    Fused 0 MatmulTransposeReshape patterns for matmul_v2 Op
--- Running IR pass [batch_norm_act_fuse_pass]
---    fused 0 batch norm with relu activation
--- Running IR pass [softplus_activation_mkldnn_fuse_pass]
---    fused 0 softplus with relu activation
---    fused 0 softplus with tanh activation
---    fused 0 softplus with leaky_relu activation
---    fused 0 softplus with swish activation
---    fused 0 softplus with hardswish activation
---    fused 0 softplus with sqrt activation
---    fused 0 softplus with abs activation
---    fused 0 softplus with clip activation
---    fused 0 softplus with gelu activation
---    fused 0 softplus with relu6 activation
---    fused 0 softplus with sigmoid activation
--- Running IR pass [elt_act_mkldnn_fuse_pass]
I0616 17:04:53.007876  1718 fuse_pass_base.cc:57] ---  detected 12 subgraphs
---    fused 12 elementwise_add with relu activation
---    fused 0 elementwise_add with tanh activation
---    fused 0 elementwise_add with leaky_relu activation
---    fused 0 elementwise_add with swish activation
---    fused 0 elementwise_add with hardswish activation
---    fused 0 elementwise_add with sqrt activation
---    fused 0 elementwise_add with abs activation
---    fused 0 elementwise_add with clip activation
---    fused 0 elementwise_add with gelu activation
---    fused 0 elementwise_add with relu6 activation
---    fused 0 elementwise_add with sigmoid activation
---    fused 0 elementwise_sub with relu activation
---    fused 0 elementwise_sub with tanh activation
---    fused 0 elementwise_sub with leaky_relu activation
---    fused 0 elementwise_sub with swish activation
---    fused 0 elementwise_sub with hardswish activation
---    fused 0 elementwise_sub with sqrt activation
---    fused 0 elementwise_sub with abs activation
---    fused 0 elementwise_sub with clip activation
---    fused 0 elementwise_sub with gelu activation
---    fused 0 elementwise_sub with relu6 activation
---    fused 0 elementwise_sub with sigmoid activation
---    fused 0 elementwise_mul with relu activation
---    fused 0 elementwise_mul with tanh activation
---    fused 0 elementwise_mul with leaky_relu activation
---    fused 0 elementwise_mul with swish activation
---    fused 0 elementwise_mul with hardswish activation
---    fused 0 elementwise_mul with sqrt activation
---    fused 0 elementwise_mul with abs activation
---    fused 0 elementwise_mul with clip activation
---    fused 0 elementwise_mul with gelu activation
---    fused 0 elementwise_mul with relu6 activation
---    fused 0 elementwise_mul with sigmoid activation
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0616 17:04:53.053191  1718 analysis_predictor.cc:1035] ======= optimize end =======
I0616 17:04:53.055065  1718 naive_executor.cc:102] ---  skip [feed], feed -> trans_cam_to_img
I0616 17:04:53.055071  1718 naive_executor.cc:102] ---  skip [feed], feed -> images
I0616 17:04:53.055073  1718 naive_executor.cc:102] ---  skip [feed], feed -> down_ratios
I0616 17:04:53.056514  1718 naive_executor.cc:102] ---  skip [concat_13.tmp_0], fetch -> fetch
148 94 97 101 105 109
terminate called after throwing an instance of 'phi::enforce::EnforceNotMet'
  what():
```


```
C++ Traceback (most recent call last):

Error Message Summary:

InvalidArgumentError: The input of Op(Conv) should be a 4-D or 5-D Tensor. But received: input's dimension is 2, input's shape is [1, 9].
  [Hint: Expected in_dims.size() == 4 || in_dims.size() == 5 == true, but received in_dims.size() == 4 || in_dims.size() == 5:0 != true:1.]
  (at /paddle/paddle/fluid/operators/conv_op.cc:71)
  [operator < conv2d > error]
Aborted (core dumped)
```

FunnyWii commented 8 months ago

Same problem here. How did you solve it in the end?

nepeplwu commented 8 months ago

@hunagjingwei @FunnyWii Thanks for the report. This issue has been fixed; see https://github.com/PaddlePaddle/Paddle3D/commit/9df597e381d796f0245815472dbe644cdcec448a. Please pull the latest code and try again.