marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

generate calib.table #409

Closed Rm1n90 closed 11 months ago

Rm1n90 commented 12 months ago

Hi @marcoslucianops,

I followed the README to generate the INT8 engine file, but I'm getting an error related to calib.table, which does not exist, while the FP16 and FP32 conversions work perfectly. I was able to generate the INT8 engine (YOLOv5) a couple of months ago, before the major update of the repo, but now I can't anymore. I also removed everything and cloned the repo again.

I get this error:

```
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/Documents/Dev/DeepStream-Yolo/model_b1_gpu0_int8.engine open error
0:00:01.159080019 17762 0x55de125e0d60 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/home/Documents/Dev/DeepStream-Yolo/model_b1_gpu0_int8.engine failed
0:00:01.210732571 17762 0x55de125e0d60 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/home/Documents/Dev/DeepStream-Yolo/model_b1_gpu0_int8.engine failed, try rebuild
0:00:01.210749108 17762 0x55de125e0d60 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:659 INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in 'NvDsInferCreateNetwork' implementation
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

File does not exist: /home/Documents/Dev/DeepStream-Yolo/calib.table
OpenCV is required to run INT8 calibrator

Failed to build CUDA engine
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:728 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:794 Failed to get cuda engine from custom library API
0:00:02.033758439 17762 0x55de125e0d60 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:02.086947288 17762 0x55de125e0d60 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:02.087010079 17762 0x55de125e0d60 ERROR                nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:02.087028919 17762 0x55de125e0d60 WARN                 nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:02.087033661 17762 0x55de125e0d60 WARN                 nvinfer gstnvinfer.cpp:888:gst_nvinfer_start:<primary_gie> error: Config file path: /home/Documents/Dev/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:716>: Failed to set pipeline to PAUSED
Quitting
nvstreammux: Successfully handled EOS for source_id=0
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/Documents/Dev/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
```

Here is my config_infer_primary_yoloV8.txt:

```ini
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=yolov8s.onnx
model-engine-file=model_b1_gpu0_int8.engine
int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=1
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
#force-implicit-batch-dim=1
#workspace-size=1000
parse-bbox-func-name=NvDsInferParseYolo
#parse-bbox-func-name=NvDsInferParseYoloCuda
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
```

And this is the command I used to generate the .onnx file:

```shell
python3 export_yoloV8.py -w yolov8s.pt --dynamic -s 1280 --simplify
```

I can confirm that I installed OpenCV 4.8.0.74 and compiled nvdsinfer_custom_impl_Yolo with OpenCV support. TensorRT: 8.5.2.2, CUDA: 11.8, GPU: RTX 3090, DeepStream: 6.2.
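As a sanity check (a sketch, not from the original report), one way to verify that the plugin really was built with OpenCV support is to inspect the shared library's linkage; the library path below is the `custom-lib-path` from the config above, and `ldd` output will vary by system:

```shell
# Check whether the custom plugin links against OpenCV.
# Path taken from custom-lib-path in the config above.
LIB=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
if [ -f "$LIB" ]; then
    # A build with OPENCV=1 should pull in libopencv_* shared objects
    ldd "$LIB" | grep -i opencv || echo "no OpenCV linkage found"
else
    echo "library not built yet"
fi
```

If no `libopencv_*` entries show up, the library was likely built without `OPENCV=1` and the INT8 calibrator will refuse to run.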

Any idea how I can solve this?

Thanks!!

marcoslucianops commented 11 months ago

> I'm getting an error related to calib.table, which does not exist

It will be created after the calibration process.

> OpenCV is required to run INT8 calibrator

Please run `CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo clean` and then `CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo` to check if it will compile with OpenCV.

sergii-matiukha commented 11 months ago

I have the same problem with DeepStream 6.2/6.1.1/6.1 on the Jetson platform.

I ran `CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo`, but got the same result.

Rm1n90 commented 11 months ago

@sergii-matiukha, run `CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo clean` and `CUDA_VER=11.8 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo` as @marcoslucianops said, and it should be fine! These commands worked for me.

sergii-matiukha commented 11 months ago

Thank you! Indeed, in my case this worked: `CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo clean` followed by `CUDA_VER=11.4 OPENCV=1 make -C nvdsinfer_custom_impl_Yolo`.

sergii-matiukha commented 11 months ago

Another question: what should the calibration images be for a custom YOLOv8 model with input size 1504×2016?

marcoslucianops commented 11 months ago

The images are resized to the network size during calibration, so they don't need to match the model's input dimensions.