marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

ERROR from primary_gie: Failed to create NvDsInferContext instance #504

Open IronmanVsThanos opened 5 months ago

IronmanVsThanos commented 5 months ago

When quantizing YOLOv8 to INT8, the following error occurs:

```
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  open error
0:00:02.290955317  5873   0x7f14001f80 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  failed
0:00:02.309668932  5873   0x7f14001f80 WARN                 nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/model_b1_gpu0_int8.engine  failed, try rebuild
0:00:02.310000198  5873   0x7f14001f80 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
WARNING: [TRT]: onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

File does not exist: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/calib.table
Failed to read image for calibration
ERROR: [TRT]: 1: Unexpected exception _Map_base::at
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:07.140146573  5873   0x7f14001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:07.159911905  5873   0x7f14001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:07.160046945  5873   0x7f14001f80 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:07.160156834  5873   0x7f14001f80 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:07.160200386  5873   0x7f14001f80 WARN                 nvinfer gstnvinfer.cpp:841:gst_nvinfer_start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:707>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(841): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
```
My environment configuration:
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.2
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3

PyTorch: 1.9.1 (cu111)
ONNX Runtime: 1.16.3
onnxsim: 0.4.35
ONNX opset: 12/11
Convert-to-ONNX command: `python3 export_yolov8.py -w ./best.pt -s 416 --simplify`

@marcoslucianops please help me, thank you!!

IronmanVsThanos commented 5 months ago

@pullmyleg

IronmanVsThanos commented 5 months ago

My configs (`config_infer_primary_yoloV8.txt` followed by the deepstream-app config):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=best.onnx
model-engine-file=model_b1_gpu0_int8.engine 
int8-calib-file=calib.table
labelfile-path=labels_team.txt
batch-size=1
network-mode=1
num-detected-classes=10
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=3
uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=5
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV8.txt

[tests]
file-loop=0
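
For reference, the INT8-relevant keys in the infer config above are the following two lines; a minimal sketch with the key semantics as documented for Gst-nvinfer (the comments are editorial, not part of the original config):

```
[property]
network-mode=1              # 0=FP32, 1=INT8, 2=FP16
int8-calib-file=calib.table # calibration table; generated during the build if absent
```

With `network-mode=1`, TensorRT needs either an existing `calib.table` or a set of calibration images to produce one, which is what the error log above is complaining about.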
pullmyleg commented 5 months ago

@IronmanVsThanos we are using this and it works. Have you created this file: `/opt/nvidia/deepstream/deepstream-6.0/sources/objectDetector_Yolo/DeepStream-Yolo-master-int8-test/calib.table`?

IronmanVsThanos commented 5 months ago

No. Did you successfully quantize YOLOv8 to INT8 on DeepStream 6.0? @pullmyleg

IronmanVsThanos commented 5 months ago

Can you help me? Thank you @marcoslucianops

ZouJiu1 commented 2 months ago

`calib.table` is an argument that specifies an output path, not an input path. What you need to prepare as input is a set of COCO images for calibration. If you prepare the images and follow the instructions, the warning will disappear. https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/#deepstream-configuration-for-yolov8
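
The preparation described above can be sketched as a few shell commands. This follows the repo's INT8 calibration docs; the image source path and the `INT8_CALIB_*` environment variable names are taken from those docs and should be checked against the version you are using:

```shell
# Collect calibration images (COCO val images are commonly used).
mkdir -p calibration
# Example: copy a random subset from a local COCO directory (path is illustrative):
# for jpg in $(ls /path/to/coco/val2017/*.jpg | shuf -n 1000); do cp "$jpg" calibration/; done

# Write one absolute image path per line for the calibrator to read.
realpath calibration/*.jpg > calibration.txt

# Point the custom engine builder at the image list before running deepstream-app
# (env vars per the DeepStream-Yolo INT8 docs; verify against your repo version).
export INT8_CALIB_IMG_PATH="$(pwd)/calibration.txt"
export INT8_CALIB_BATCH_SIZE=1
```

On the next run, the engine build reads these images, writes `calib.table`, and subsequent runs reuse the table without recalibrating.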

marcoslucianops commented 2 months ago

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/INT8Calibration.md