marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License
1.4k stars 345 forks

Infer Context default input_layer is not a image[CHW] #377

Closed shao77622 closed 1 year ago

shao77622 commented 1 year ago

I converted a PP-YOLOE+ model to ONNX, then ran the DeepStream Python app in Docker, but I get the errors below.

DeepStream version: 6.2
Model: ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle
CUDA: 11.8
Driver: 525.116.03

0:01:35.675769211  6318 0x7f32440474f0 INFO  nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/sources/aiBox/model_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0   INPUT  kFLOAT scale_factor  2
1   INPUT  kFLOAT image         3x320x320
2   OUTPUT kFLOAT tmp_17        2100x4
3   OUTPUT kFLOAT max_0.tmp_0   2100x1
4   OUTPUT kFLOAT cast_0.tmp_0  2100x1

0:01:35.730364688  6318 0x7f32440474f0 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initInferenceInfo() [UID = 1]: Infer Context default input_layer is not a image[CHW]
ERROR: nvdsinfer_context_impl.cpp:1270 Infer context initialize inference info failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:01:35.738624678  6318 0x7f32440474f0 WARN  nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:01:35.738635488  6318 0x7f32440474f0 WARN  nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: ./deepstream/configs/roadway_pgie_config.txt, NvDsInfer Error: NVDSINFER_TENSORRT_ERROR
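For illustration, the failing check can be sketched in plain Python. This is a hypothetical reconstruction, not the actual nvdsinfer code: nvdsinfer treats the first input binding of the engine as the image and expects its dimensions to be 3-dimensional (CHW), so an engine whose first input is the 2-element `scale_factor` tensor fails.

```python
# Hypothetical sketch of the input-layer check that nvdsinfer performs.
# Names and structure here are illustrative, not the real DeepStream API.

def first_input_is_image_chw(input_layers):
    """input_layers: list of (name, dims) tuples in engine binding order.

    Returns True only when the first input looks like a CHW image,
    i.e. it has exactly 3 dimensions.
    """
    if not input_layers:
        return False
    _name, dims = input_layers[0]
    return len(dims) == 3

# Binding order from the failing engine: scale_factor comes first.
failing = [("scale_factor", (2,)), ("image", (3, 320, 320))]
# Binding order the repo expects: image comes first.
working = [("image", (3, 320, 320)), ("scale_factor", (2,))]

print(first_input_is_image_chw(failing))  # False -> NVDSINFER_TENSORRT_ERROR
print(first_input_is_image_chw(working))  # True
```

With the failing binding order, the check rejects `scale_factor` (a 1-D tensor) as the image input, which matches the `input_layer is not a image[CHW]` message in the log above.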

shao77622 commented 1 year ago

nvdsinfer only supports one input layer, so why does the PP-YOLOE+ ONNX model have 2 input layers?

marcoslucianops commented 1 year ago

> nvdsinfer only supports one input layer, so why does the PP-YOLOE+ ONNX model have 2 input layers?

Yes, but this repo supports the 2 inputs from PPYOLOE and PPYOLOE+.

shao77622 commented 1 year ago

> nvdsinfer only supports one input layer, so why does the PP-YOLOE+ ONNX model have 2 input layers?

> Yes, but this repo supports the 2 inputs from PPYOLOE and PPYOLOE+.

Which repo? I'm using the master branch, and it doesn't work.

marcoslucianops commented 1 year ago

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/PPYOLOE.md

shao77622 commented 1 year ago

> https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/PPYOLOE.md

I followed that README exactly, and then I hit this issue. PaddleDetection version: 2.6, PaddlePaddle version: 2.4.2.

marcoslucianops commented 1 year ago

There's something wrong in your export. I just exported this model (ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle) to ONNX and it worked. The correct output from DeepStream should be:

[screenshot: expected DeepStream layer info, with `image` as input 0]

In your case, `scale_factor` is the first input. That's why you are getting the error.
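A quick way to verify an export is to inspect the ONNX input order before building the engine. Below is a minimal sketch: the reorder helper is plain Python, while the commented lines show how you might inspect a real model with the `onnx` package (assuming it is installed; the filename is illustrative).

```python
# Sketch: make sure the `image` input comes before `scale_factor`.

def image_first(names):
    """Return the input names with 'image' moved to the front.

    sorted() is stable, so the relative order of the other inputs
    is preserved; the key sorts 'image' (False) before the rest (True).
    """
    return sorted(names, key=lambda n: n != "image")

print(image_first(["scale_factor", "image"]))  # ['image', 'scale_factor']

# Inspecting a real export (assumes the `onnx` package is installed and
# the path is illustrative):
#   import onnx
#   model = onnx.load("ppyoloe_plus_crn_t_auxhead_320_60e_ppvehicle.onnx")
#   print([i.name for i in model.graph.input])  # should start with 'image'
```

If the printed list from a real model starts with `scale_factor`, the export step produced the wrong binding order and should be redone rather than patched.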

shao77622 commented 1 year ago

I made a new Python env and did it all over again, and it worked!