PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

Inference with TensorRT 10: The TRT network should be initialized first. #9042

Open msarmiento3 opened 2 days ago

msarmiento3 commented 2 days ago

Search before asking

Bug Component

Inference

Describe the Bug

Hello! I am trying to run PaddleDetection with TensorRT 10 and Python 3.9. I installed Python 3.9 inside the NVIDIA Docker image nvcr.io/nvidia/paddlepaddle:24.05-py3. When I execute the following command, I get this error:

CUDA_VISIBLE_DEVICES=0 python deploy/python/infer.py --model_dir=output_inference/ppyoloe_plus_crn_l_80e_coco --image_file=demo/000000014439_640x640.jpg --run_mode=trt_fp16 --device=gpu --run_benchmark=True
-----------  Running Arguments -----------
action_file: None
batch_size: 1
camera_id: -1
combine_method: nms
cpu_threads: 1
device: gpu
enable_mkldnn: False
enable_mkldnn_bfloat16: False
image_dir: None
image_file: /mnt/task_runtime/dataset/coco/val2017/000000421455.jpg
match_metric: ios
match_threshold: 0.6
model_dir: output_inference/ppyoloe_plus_crn_l_80e_coco
output_dir: output
overlap_ratio: [0.25, 0.25]
random_pad: False
reid_batch_size: 50
reid_model_dir: None
run_benchmark: True
run_mode: trt_fp16
save_images: True
save_mot_txt_per_img: False
save_mot_txts: False
save_results: False
scaled: False
slice_infer: False
slice_size: [640, 640]
threshold: 0.5
tracker_config: None
trt_calib_mode: False
trt_max_shape: 1280
trt_min_shape: 1
trt_opt_shape: 640
use_coco_category: False
use_dark: True
use_gpu: False
video_file: None
window_size: 50
------------------------------------------
-----------  Model Configuration -----------
Model Arch: YOLO
Transform Order:
--transform op: Resize
--transform op: NormalizeImage
--transform op: Permute
--------------------------------------------
Traceback (most recent call last):
  File "/mnt/task_runtime/deploy/python/infer.py", line 1084, in <module>
    main()
  File "/mnt/task_runtime/deploy/python/infer.py", line 1018, in main
    detector = eval(detector_func)(
  File "/mnt/task_runtime/deploy/python/infer.py", line 107, in __init__
    self.predictor, self.config = load_predictor(
  File "/mnt/task_runtime/deploy/python/infer.py", line 932, in load_predictor
    predictor = create_predictor(config)
ValueError: (InvalidArgument) The TRT network should be initialized first.
  [Hint: Expected network() != nullptr == true, but received network() != nullptr:0 != true:1.] (at /paddle/paddle/fluid/inference/tensorrt/engine.cc:437)

Do you know what might be the problem here?
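For reference, the failure happens inside create_predictor(). The snippet below is a minimal sketch of the same TensorRT configuration that load_predictor() builds in deploy/python/infer.py; the file names and input shapes are my assumptions based on the default PaddleDetection export layout and the trt_min/opt/max_shape arguments shown above, so adjust them if your export differs. It should reproduce the error without the rest of infer.py:

```python
# Minimal sketch of the predictor setup performed by load_predictor() in
# deploy/python/infer.py. File names and input shapes are assumptions based on
# the default PaddleDetection export layout (model.pdmodel / model.pdiparams)
# and the trt_min/opt/max_shape arguments listed above.
from paddle.inference import Config, PrecisionType, create_predictor

model_dir = "output_inference/ppyoloe_plus_crn_l_80e_coco"
config = Config(model_dir + "/model.pdmodel", model_dir + "/model.pdiparams")
config.enable_use_gpu(200, 0)          # 200 MB initial GPU memory pool, device 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,            # 1 GB TensorRT workspace
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Half, # run_mode=trt_fp16
    use_static=False,
    use_calib_mode=False)
# Dynamic shape info for the image input, mirroring trt_min/opt/max_shape.
config.set_trt_dynamic_shape_info(
    {"image": [1, 3, 1, 1]},
    {"image": [1, 3, 1280, 1280]},
    {"image": [1, 3, 640, 640]})
predictor = create_predictor(config)   # raises the same InvalidArgument error here
```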

Environment

Docker image: nvcr.io/nvidia/paddlepaddle:24.05-py3 with Python 3.9 (not Python 3.10)

- OS: Linux
- Python 3.9
- PaddlePaddle 2.6.0
- PaddleDetection 2.6.0
- CUDA 12.4
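For completeness, here is a quick check of which TensorRT version this PaddlePaddle build was compiled against versus the runtime version available in the container, in case the mismatch is the cause (assuming these paddle.inference helpers are available in this build):

```python
# Compare the TensorRT version PaddlePaddle was compiled against with the
# TensorRT runtime found in the container (assumes these helpers exist in
# this paddle.inference build).
import paddle
from paddle import inference

print("PaddlePaddle:", paddle.__version__)
print("TensorRT compile version:", inference.get_trt_compile_version())
print("TensorRT runtime version:", inference.get_trt_runtime_version())
```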

Bug description confirmation

Are you willing to submit a PR?