marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License
1.45k stars · 357 forks

error: invalid initialization of non-const reference of type ‘nvinfer1::ILogger&’ from an rvalue of type ‘nvinfer1::ILogger*’ #368

Closed: huihui308 closed this issue 1 year ago

huihui308 commented 1 year ago

There is an error when I compile nvdsinfer_custom_impl_Yolo. The platform I am running on is an NVIDIA Xavier with JetPack 4.6.

$ CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/home/lcfc/david/code/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c  -o yolo.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-10.2/include yolo.cpp
yolo.cpp: In member function ‘nvinfer1::ICudaEngine* Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*)’:
yolo.cpp:80:60: error: invalid initialization of non-const reference of type ‘nvinfer1::ILogger&’ from an rvalue of type ‘nvinfer1::ILogger*’
     parser = nvonnxparser::createParser(*network, getLogger());
                                                   ~~~~~~~~~^~
In file included from yolo.cpp:26:0:
/usr/include/aarch64-linux-gnu/NvOnnxParser.h:242:17: note: in passing argument 2 of ‘nvonnxparser::IParser* nvonnxparser::{anonymous}::createParser(nvinfer1::INetworkDefinition&, nvinfer1::ILogger&)’
 inline IParser* createParser(nvinfer1::INetworkDefinition& network, nvinfer1::ILogger& logger)
                 ^~~~~~~~~~~~
yolo.cpp:207:83: warning: ‘nvinfer1::ICudaEngine* nvinfer1::IBuilder::buildEngineWithConfig(nvinfer1::INetworkDefinition&, nvinfer1::IBuilderConfig&)’ is deprecated [-Wdeprecated-declarations]
   nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
                                                                                   ^
In file included from /usr/include/aarch64-linux-gnu/NvOnnxParser.h:26:0,
                 from yolo.cpp:26:
/usr/include/aarch64-linux-gnu/NvInfer.h:7990:43: note: declared here
     TRT_DEPRECATED nvinfer1::ICudaEngine* buildEngineWithConfig(
                                           ^~~~~~~~~~~~~~~~~~~~~
Makefile:81: recipe for target 'yolo.o' failed
make: *** [yolo.o] Error 1
marcoslucianops commented 1 year ago

Which DeepStream version are you using?

huihui308 commented 1 year ago

DeepStream is 6.0. The platform I am running on is an NVIDIA Xavier with JetPack 4.6.

marcoslucianops commented 1 year ago

Did you change the yolo.cpp file?

This line

parser = nvonnxparser::createParser(*network, getLogger());

It should be

parser = nvonnxparser::createParser(*network, *builder->getLogger());

huihui308 commented 1 year ago

Did you change the yolo.cpp file?

This line

parser = nvonnxparser::createParser(*network, getLogger());

It should be

parser = nvonnxparser::createParser(*network, *builder->getLogger());

No, I did not. The log is as follows.

$ CUDA_VER=10.2 make -C nvdsinfer_custom_impl_Yolo
make: Entering directory '/home/lcfc/david/code/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
g++ -c  -o yolo.o -Wall -std=c++11 -shared -fPIC -Wno-error=deprecated-declarations -I/opt/nvidia/deepstream/deepstream/sources/includes -I/usr/local/cuda-10.2/include yolo.cpp
yolo.cpp: In member function ‘nvinfer1::ICudaEngine* Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*)’:
yolo.cpp:80:61: error: ‘class nvinfer1::IBuilder’ has no member named ‘getLogger’
     parser = nvonnxparser::createParser(*network, *builder->getLogger());
                                                             ^~~~~~~~~
yolo.cpp:207:83: warning: ‘nvinfer1::ICudaEngine* nvinfer1::IBuilder::buildEngineWithConfig(nvinfer1::INetworkDefinition&, nvinfer1::IBuilderConfig&)’ is deprecated [-Wdeprecated-declarations]
   nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
                                                                                   ^
In file included from /usr/include/aarch64-linux-gnu/NvOnnxParser.h:26:0,
                 from yolo.cpp:26:
/usr/include/aarch64-linux-gnu/NvInfer.h:7990:43: note: declared here
     TRT_DEPRECATED nvinfer1::ICudaEngine* buildEngineWithConfig(
                                           ^~~~~~~~~~~~~~~~~~~~~
Makefile:81: recipe for target 'yolo.o' failed
make: *** [yolo.o] Error 1
make: Leaving directory '/home/lcfc/david/code/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo'
huihui308 commented 1 year ago

Did you change the yolo.cpp file?

This line

parser = nvonnxparser::createParser(*network, getLogger());

It should be

parser = nvonnxparser::createParser(*network, *builder->getLogger());

I have solved this issue by modifying it to:

Logger logger;
parser = nvonnxparser::createParser(*network, logger.getTRTLogger());

And adding a Logger definition outside of this function:

class Logger : public nvinfer1::ILogger
{
public:
    Logger(Severity severity = Severity::kWARNING)
        : mSeverity(severity)
    {
    }

    ~Logger() = default;

    nvinfer1::ILogger& getTRTLogger()
    {
        return *this;
    }

    void log(nvinfer1::ILogger::Severity severity, const char* msg) noexcept override
    {
        // suppress messages below the configured threshold
        // (INFO and VERBOSE with the default kWARNING)
        if (severity > mSeverity) return;

        switch (severity)
        {
        case Severity::kINTERNAL_ERROR: std::cerr << "INTERNAL_ERROR: " << msg << std::endl; break;
        case Severity::kERROR: std::cerr << "ERROR: " << msg << std::endl; break;
        case Severity::kWARNING: std::cerr << "WARNING: " << msg << std::endl; break;
        case Severity::kINFO: std::cerr << "INFO: " << msg << std::endl; break;
        case Severity::kVERBOSE: break;
        default: std::cerr << "UNKNOWN: " << msg << std::endl; break;
        }
    }

private:
    Severity mSeverity;
};

And then I can compile the lib successfully.

But there is another issue when I run it.

$ deepstream-app -c deepstream_app_config.txt                            
ERROR: Deserialize engine failed because file path: /home/lcfc/david/code/DeepStream-Yolo/model_b1_gpu0_fp32.engine open error
0:00:01.238592781 31493   0x557cc062c0 WARN                 nvinfer gstnvinfer.cpp:644:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1889> [UID = 1]: deserialize engine from file :/home/lcfc/david/code/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed
0:00:01.238723411 31493   0x557cc062c0 WARN                 nvinfer gstnvinfer.cpp:644:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1996> [UID = 1]: deserialize backend context from engine from file :/home/lcfc/david/code/DeepStream-Yolo/model_b1_gpu0_fp32.engine failed, try rebuild
0:00:01.238756373 31493   0x557cc062c0 INFO                 nvinfer gstnvinfer.cpp:647:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
WARNING: onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: ModelImporter.cpp:720: While parsing node number 239 [Range -> "/0/model.22/Range_output_0"]:
ERROR: ModelImporter.cpp:721: --- Begin node ---
ERROR: ModelImporter.cpp:722: input: "/0/model.22/Constant_8_output_0"
input: "/0/model.22/Cast_output_0"
input: "/0/model.22/Constant_9_output_0"
output: "/0/model.22/Range_output_0"
name: "/0/model.22/Range"
op_type: "Range"

ERROR: ModelImporter.cpp:723: --- End node ---
ERROR: ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:3172 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.513890898 31493   0x557cc062c0 ERROR                nvinfer gstnvinfer.cpp:641:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:01.514005976 31493   0x557cc062c0 ERROR                nvinfer gstnvinfer.cpp:641:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:01.514071931 31493   0x557cc062c0 ERROR                nvinfer gstnvinfer.cpp:641:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:01.514197730 31493   0x557cc062c0 WARN                 nvinfer gstnvinfer.cpp:850:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:01.514263813 31493   0x557cc062c0 WARN                 nvinfer gstnvinfer.cpp:850:gst_nvinfer_start:<primary_gie> error: Config file path: /home/lcfc/david/code/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: <main:679>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: gstnvinfer.cpp(850): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie:
Config file path: /home/lcfc/david/code/DeepStream-Yolo/config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed

How can I solve this problem? I am running DeepStream 6.0 on a Jetson Xavier with JetPack 4.6.

huihui308 commented 1 year ago


I have solved this problem. I generated the ONNX file using this command:

python3 export_yoloV8.py -w best.pt --dynamic -s 640

Then the above issue occurred, but when I removed '--dynamic', the issue disappeared:

python3 export_yoloV8.py -w best.pt -s 640

The reason this problem happened is that TensorRT on JetPack 4.6 does not support the dynamic export. The log is:

ERROR: ModelImporter.cpp:720: While parsing node number 239 [Range -> "/model.22/Range_output_0"]:
ERROR: ModelImporter.cpp:721: --- Begin node ---
ERROR: ModelImporter.cpp:722: input: "/model.22/Constant_8_output_0"
input: "/model.22/Cast_output_0"
input: "/model.22/Constant_9_output_0"
output: "/model.22/Range_output_0"
name: "/model.22/Range"
op_type: "Range"

ERROR: ModelImporter.cpp:723: --- End node ---
ERROR: ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:3172 In function importRange:
[8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"

Could not parse the ONNX model

Failed to build CUDA engine

So I think we can only use a fixed batch-size on DeepStream 6.0.

marcoslucianops commented 1 year ago

I will fix the logger issue in the next update. About the second error: the DeepStream version (specifically the TensorRT version) you are using doesn't support the --dynamic flag (due to the INT64 weights). Set --batch in the ONNX exporter equal to the batch-size you will use in DeepStream.

huihui308 commented 1 year ago

I will fix the logger issue in the next update. About the second error: the DeepStream version (specifically the TensorRT version) you are using doesn't support the --dynamic flag (due to the INT64 weights). Set --batch in the ONNX exporter equal to the batch-size you will use in DeepStream.

Thank you. I appreciate it.