PaddlePaddle / PaddleSeg

Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
https://arxiv.org/abs/2101.06175
Apache License 2.0

human_pp_humansegv1_server_512x512_inference_model_with_softmax portrait segmentation model reports an error when accelerated with TensorRT #3798

[Open] stephen-TT opened this issue 1 week ago

stephen-TT commented 1 week ago

My code:

class Predictor:
    def __init__(self, args):
        self.args = args
        self.cfg = DeployConfig(args.config, args.vertical_screen)
        self.compose = T.Compose(self.cfg.transforms)

        pred_cfg = PredictConfig(self.cfg.model, self.cfg.params)
        # pred_cfg.disable_glog_info()
        if self.args.use_gpu:
            print(f'-----------------aaaa------------')
            pred_cfg.enable_use_gpu(200, 0)

        pred_cfg.enable_tensorrt_engine(
            workspace_size=1 << 30,
            max_batch_size=1, min_subgraph_size=5,
            precision_mode=paddle_infer.PrecisionType.Float32,
            use_static=False, use_calib_mode=False)

        min_input_shape = {"image": [1, 3, 10, 10]}
        max_input_shape = {"image": [1, 3, 1920, 1080]}
        opt_input_shape = {"image": [1, 3, 480, 320]}

        pred_cfg.set_trt_dynamic_shape_info(min_input_shape, max_input_shape, opt_input_shape)

        # Query via the API whether TensorRT is enabled - prints True
        print("Enable TensorRT is: {}".format(pred_cfg.tensorrt_engine_enabled()))
        print(f'-----------------aaaa------------')

        self.predictor = create_predictor(pred_cfg)
        # if self.args.test_speed:
        #     self.cost_averager = TimeAverager()

        if args.use_optic_flow:
            self.disflow = cv2.DISOpticalFlow_create(
                cv2.DISOPTICAL_FLOW_PRESET_ULTRAFAST)
            width, height = self.cfg.target_size()
            self.prev_gray = np.zeros((height, width), np.uint8)
            self.prev_cfd = np.zeros((height, width), np.float32)
            self.is_first_frame = True

Error log:

--------------args.use_gpu: True
-----------------aaaa------------
Enable TensorRT is: True
-----------------aaaa------------
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [adaptive_pool2d_convert_global_pass]
--- Running IR pass [shuffle_channel_detect_pass]
--- Running IR pass [quant_conv2d_dequant_fuse_pass]
--- Running IR pass [delete_quant_dequant_op_pass]
--- Running IR pass [delete_quant_dequant_filter_op_pass]
--- Running IR pass [delete_weight_dequant_linear_op_pass]
--- Running IR pass [delete_quant_dequant_linear_op_pass]
--- Running IR pass [add_support_int8_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [preln_embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [skip_layernorm_fuse_pass]
--- Running IR pass [preln_skip_layernorm_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [unsqueeze2_eltwise_fuse_pass]
--- Running IR pass [trt_squeeze2_matmul_fuse_pass]
--- Running IR pass [trt_reshape2_matmul_fuse_pass]
--- Running IR pass [trt_flatten2_matmul_fuse_pass]
--- Running IR pass [trt_map_matmul_v2_to_mul_pass]
--- Running IR pass [trt_map_matmul_v2_to_matmul_pass]
--- Running IR pass [trt_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [tensorrt_subgraph_pass]
Exception in thread Thread-7:
Traceback (most recent call last):
  File "D:\programFiles\miniconda3\envs\motion_control\lib\threading.py", line 932, in _bootstrap_inner
    self.run()
  File "C:\Users\aoto\Desktop\MotionControlSystem_beijing-0828-seg-Solution\apps\human_seg_app\human_seg.py", line 153, in run
    predictor = Predictor(args)
  File "C:\Users\aoto\Desktop\MotionControlSystem_beijing-0828-seg-Solution\apps\human_seg_app\infer.py", line 120, in __init__
    self.predictor = create_predictor(pred_cfg)
ValueError: (InvalidArgument) some trt inputs dynamic shape info not set, check the INFO log above for more details.
  [Hint: Expected all_dynamic_shape_set == true, but received all_dynamic_shape_set:0 != true:1.] (at ..\paddle/fluid/inference/tensorrt/convert/op_converter.h:308)
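
The hint in the traceback means that at least one tensor feeding a TensorRT subgraph has no dynamic shape range registered, while the three dictionaries above only cover "image". One common workaround in Paddle Inference is to record the shape ranges during a warm-up run with collect_shape_range_info and then load them with enable_tuned_tensorrt_dynamic_shape instead of hand-written min/max/opt dictionaries. The sketch below is only an illustration of that pattern (the model/params paths and the warm-up loop are placeholders), not a confirmed fix for this model:

import paddle.inference as paddle_infer

# Pass 1: run the plain GPU predictor once and record the shape range of
# every tensor to a file (file names below are placeholders).
collect_cfg = paddle_infer.Config("model.pdmodel", "model.pdiparams")
collect_cfg.enable_use_gpu(200, 0)
collect_cfg.collect_shape_range_info("shape_range_info.pbtxt")
collector = paddle_infer.create_predictor(collect_cfg)
# ... feed a few representative frames through `collector` here ...

# Pass 2: enable TensorRT and load the recorded ranges, so every subgraph
# input gets dynamic shape info, not only "image".
pred_cfg = paddle_infer.Config("model.pdmodel", "model.pdiparams")
pred_cfg.enable_use_gpu(200, 0)
pred_cfg.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1, min_subgraph_size=5,
    precision_mode=paddle_infer.PrecisionType.Float32,
    use_static=False, use_calib_mode=False)
pred_cfg.enable_tuned_tensorrt_dynamic_shape("shape_range_info.pbtxt", True)
predictor = paddle_infer.create_predictor(pred_cfg)
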
liuhongen1234567 commented 1 week ago

Hello, you can refer to this issue and use disable_glog_info() to check the detailed error message: https://github.com/PaddlePaddle/PaddleOCR/issues/3010
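
If the detailed INFO log does list the names of the subgraph inputs that are missing ranges, another option is to extend the existing set_trt_dynamic_shape_info call with those tensors. The tensor name and shapes below are placeholders, purely to illustrate the pattern:

# Hypothetical sketch: "foo_tensor" stands for whatever additional input
# the INFO log reports as missing dynamic shape info.
min_input_shape = {"image": [1, 3, 10, 10], "foo_tensor": [1, 64, 5, 5]}
max_input_shape = {"image": [1, 3, 1920, 1080], "foo_tensor": [1, 64, 960, 540]}
opt_input_shape = {"image": [1, 3, 480, 320], "foo_tensor": [1, 64, 240, 160]}
pred_cfg.set_trt_dynamic_shape_info(min_input_shape, max_input_shape, opt_input_shape)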