NVIDIA-AI-IOT / yolo_deepstream

yolo model qat and deploy with deepstream&tensorrt
Apache License 2.0

Using NHWC format instead of NCHW for deepstream #18

Open romil611 opened 2 years ago

romil611 commented 2 years ago

Hi, I'm modifying the deepstream-test1 and yolo_deepstream apps to use my own INT8 TensorRT engine. The model takes input in NHWC format, but the apps expect NCHW. I followed this link to build the INT8 TensorRT engine. If DeepStream doesn't work with NHWC, please guide me on converting the engine/ONNX/saved_model (TensorFlow pb file) to NCHW.

Currently I'm getting this:

```
INFO: [Implicit Engine Info]: layers num: 5
0 INPUT  kFLOAT image_arrays:0    640x640x3
1 OUTPUT kINT32 num_detections    0
2 OUTPUT kFLOAT detection_boxes   1024x4
3 OUTPUT kFLOAT detection_scores  1024
4 OUTPUT kFLOAT detection_classes 1024
```

```
0:00:08.703674449 4915 0x558fe32520 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() [UID = 1]: RGB/BGR input format specified but network input channels is not 3
```
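For context on the error: DeepStream's nvinfer preprocessor treats the first input dimension as the channel count, so a 640x640x3 (NHWC) binding is read as 640 channels, which triggers the "network input channels is not 3" message. A minimal sketch of the layout difference, using NumPy (the tensor contents here are illustrative, not from the actual model):

```python
import numpy as np

# A dummy batch in NHWC layout, matching the engine's 640x640x3 binding
nhwc = np.zeros((1, 640, 640, 3), dtype=np.float32)

# Permute axes (N, H, W, C) -> (N, C, H, W), the layout DeepStream expects
nchw = np.transpose(nhwc, (0, 3, 1, 2))

print(nchw.shape)  # (1, 3, 640, 640): channel dim now first, as nvinfer requires
```

This only illustrates the axis permutation; for the actual fix, the transpose has to happen in the model graph (e.g. re-exporting the ONNX with NCHW inputs) before building the TensorRT engine, since the engine's binding layout is fixed at build time.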

• Hardware Platform: Jetson (Jetson NX)
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.2
• Issue Type: questions