PX-Xu opened 1 year ago
@marcoslucianops Hope you could help us address this problem. Thank you a lot!!!
@marcoslucianops When I use the ONNX model to generate the engine on a PC with DeepStream 6.0, TensorRT 8.4.2, CUDA 11.4, and cuDNN 8.2 (this PC's GPU is a T4; maybe something about this GPU breaks engine generation?), I still get an error:
libEGL warning: DRI2: failed to authenticate
0:00:00.199772323 26614 0x561f8977a530 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:
ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:779: ERROR: ModelImporter.cpp:180 In function parseGraph:
[6] Invalid Node - Concat_393
Concat_393: concat input tensors 0 and 2 have incompatible types Float and Int32
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:315 Failed to parse onnx file
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:966 failed to build network since parsing model errors.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:05.434220771 26614 0x561f8977a530 ERROR nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:
I don't know how to solve this problem. I followed the same procedure on my own computer, which has an RTX 3060 GPU, and did not hit this problem (I could successfully generate the engine).
Hope you could help us solve this problem. Thank you a lot!!!
@PX-Xu +1. @marcoslucianops Same here on an RTX 3060 (FP16 or FP32, DeepStream 6.2 dGPU); the model previously converted successfully via wts. Is there some kind of type mismatch?
About the Concat error, please use the updated export file. I adjusted it.
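For reference, the usual cause of this error is that one input of the Concat node stays integer-typed (e.g. class indices coming out of an ArgMax) while the other inputs are float, and TensorRT requires all Concat inputs to share one type. The actual fix lives in the repo's updated export script; the sketch below only illustrates the idea with NumPy, and all tensor names are hypothetical:

```python
import numpy as np

# Hypothetical detection-head outputs: boxes/scores are float32,
# class indices come out of an ArgMax as int32.
boxes = np.zeros((100, 4), dtype=np.float32)
scores = np.zeros((100, 1), dtype=np.float32)
classes = np.zeros((100, 1), dtype=np.int32)

# TensorRT rejects a Concat over mixed Float/Int32 inputs, so cast the
# integer tensor to float32 before concatenating (in ONNX terms, the
# export script inserts a Cast node before the Concat).
out = np.concatenate([boxes, scores, classes.astype(np.float32)], axis=1)
assert out.dtype == np.float32 and out.shape == (100, 6)
```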
I have the same issue as well. I want to ask why I can't generate an engine on a Tesla T4 GPU using the method of converting weights to an engine. The error message I receive is the same as mentioned in this issue. This is the configuration file I am using:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/yolov7.cfg
model-file=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/yolov7_last.weights
model-engine-file=/home/lzy/Deepstream_iTS-main/remote_gnr/model/66/model_b1_gpu0_fp16.engine
labelfile-path=/home/lzy/Deepstream_iTS-main/remote_gnr/model/labels.txt
batch-size=1
network-mode=2
num-detected-classes=9
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/home/lzy/Deepstream_iTS-main/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
@marcoslucianops Please help me. I also encountered the issue where converting weights to an engine works on NVIDIA RTX-series GPUs but fails on T4 GPUs. My environment is similar to the one above.
@lzylzylzy123456, the Concat error is related to the ONNX model, but you are using a Darknet model. Can you send the output?
(truncated backtrace fragment: frames in /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so and /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so; the full backtrace is posted later in this thread)
@marcoslucianops Thanks for your answer. The Concat error is solved. But I still couldn't generate the engine. The error is the following:
libEGL warning: DRI2: failed to authenticate
0:00:00.193488463 7059 0x56195ed95530 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:
It looks like the same failure I hit the first time I used the Darknet model. We have always used Darknet models to generate engines, but this time it failed. Could you answer the first question I posed? Thank you a lot.
@lzylzylzy123456, to use a Darknet model (weights and cfg files), you should use the config_infer_primary.txt file (not the config_infer_primary_yoloV7.txt file) as the base config file.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
custom-network-config=yolov7.cfg
model-file=yolov7.weights
model-engine-file=model_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=0
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
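As an aside, the odd-looking net-scale-factor in the config above is just 1/255 written out with float rounding, i.e. the usual 0-255 to 0-1 pixel normalization. A quick sanity check:

```python
# net-scale-factor from the config: pixel normalization by 1/255,
# stored with single-precision float rounding.
net_scale_factor = 0.0039215697906911373
assert abs(net_scale_factor - 1.0 / 255.0) < 1e-8
```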
@PX-Xu, can you debug the segmentation fault using gdb?
@marcoslucianops I debugged it in gdb. This is the result:

Thread 1 "AppRun" received signal SIGSEGV, Segmentation fault.
__strlen_avx2 () at ../sysdeps/x86_64/multiarch/strlen-avx2.S:65
65	../sysdeps/x86_64/multiarch/strlen-avx2.S: No such file or directory.
(gdb) bt
#1  0x00007fffef994982 in _IO_puts (str=0x0) at ioputs.c:35
#2  0x00007fffab415931 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#3  0x00007fffae1bd247 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#4  0x00007fffadf6d9a9 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#5  0x00007fffab7d2a70 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#6  0x00007fffab7da878 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#7  0x00007fffab7dae23 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#8  0x00007fffab917b5b in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#9  0x00007fffab917cc6 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#10 0x00007fffab948a26 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#11 0x00007fffab949ba1 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#12 0x00007fffab6ca613 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#13 0x00007fffab6fe28f in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#14 0x00007fffaba1da3f in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#15 0x00007fffab834b43 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#16 0x00007fffab822383 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#17 0x00007fffab8364f7 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#18 0x00007fffab6f76c4 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#19 0x00007fffab6fc3c3 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#20 0x00007fffaba33211 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#21 0x00007fffaba36817 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.8
#22 0x00007fff822ec257 in nvinfer1::IBuilder::buildEngineWithConfig(nvinfer1::INetworkDefinition&, nvinfer1::IBuilderConfig&) () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#23 0x00007fff822e0c7b in Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*) () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#24 0x00007fff822f4a56 in NvDsInferYoloCudaEngineGet () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#25 0x00007fffce435562 in nvdsinfer::TrtModelBuilder::getCudaEngineFromCustomLib(bool (*)(nvinfer1::IBuilder*, _NvDsInferContextInitParams*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), bool (*)(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*, _NvDsInferContextInitParams const*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), _NvDsInferContextInitParams const&, NvDsInferNetworkMode&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#26 0x00007fffce4359b4 in nvdsinfer::TrtModelBuilder::buildModel(_NvDsInferContextInitParams const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#27 0x00007fffce3f55e4 in nvdsinfer::NvDsInferContextImpl::buildModel(_NvDsInferContextInitParams&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#28 0x00007fffce3f62a1 in nvdsinfer::NvDsInferContextImpl::generateBackendContext(_NvDsInferContextInitParams&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#29 0x00007fffce3f053b in nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#30 0x00007fffce3f6ce9 in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#31 0x00007fffd45677c1 in gst_nvinfer_start(_GstBaseTransform*) () from /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#32 0x00007fffe9bf6270 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#33 0x00007fffe9bf6505 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#34 0x00007ffff1e8c6ab in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#35 0x00007ffff1e8d126 in gst_pad_set_active () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#36 0x00007ffff1e6af0d in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#37 0x00007ffff1e7d884 in gst_iterator_fold () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#38 0x00007ffff1e6ba16 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#39 0x00007ffff1e6d95e in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#40 0x00007ffff1e6dc8f in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#41 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#42 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#43 0x00007ffff1e4da02 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#44 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#45 0x00007ffff1e70045 in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#46 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#47 0x0000555555612e48 in StreamControl::init(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, int, long, int, int) ()
#48 0x0000555555679f39 in SmartDeviceControl::init_deep_pipeline() ()
#49 0x000055555567d49e in SmartDeviceControl::init() ()
#50 0x000055555556e678 in main ()
It seems like something is wrong with libnvdsinfer_custom_impl_Yolo.so, but I had already built it on this PC.
Can you send your model to my email to check this error?
I have already sent the model and configuration files to your email.
I can generate the engine using this configuration on RTX-series graphics cards, as well as on a Jetson development board. However, I am unable to generate the engine on the T4 graphics card of our cloud server.
I have discovered something: we have developed a plugin for DeepStream, and we encountered a memory leak issue. We used Valgrind for detection and initially suspected that it might be due to OpenCV or our custom code. However, after commenting out all the custom parts, we found that the plugin was still leaking memory. Therefore, we decided to investigate the unmodified plugin and confirmed that it indeed had a memory leak. The memory usage only increases and does not decrease.
I found a forum post (https://forums.developer.nvidia.com/t/memory-leak-in-deepstream/204163/9) where you also encountered this issue a year ago. Now my environment is similar to yours, and I'm using DeepStream version 6.0. The memory leak occurs on every machine.
If you have any solutions, please let us know. Thank you very much!
Do you have any solutions? I tried reinstalling the environment and recompiling the dynamic library files, but I still can't generate an FP16 engine. I can generate an FP32 engine, but it does not detect anything. Do you have a solution for the memory leaks in DeepStream? If so, please let me know. Thank you very much!
I just tested on a T4 (AWS g4dn.2xlarge) with DeepStream 6.2 and I didn't get an error in FP32 (77.74 FPS) or FP16 (185.19 FPS) mode using your model. I will try on DeepStream 6.0 now.
Which version of Ubuntu are you using?
Thank you very much!!! I use Ubuntu 18.04.
@marcoslucianops Thank you for testing engine generation on a T4 yourself. If you successfully generated the engines (FP16 and FP32), could you send me a copy by email? Thank you very much!!! We've been stuck on this problem for a week and would really appreciate your help in solving it!!!
The engine doesn't work on another computer (unless you use exactly the same environment, and even then it's recommended to generate it on each computer). I will try on DeepStream 6.0 and update you.
Thank you! Waiting for your good news!
The problem is probably in your environment. I can run it on DeepStream 6.0.
I checked your first comment: you are using DeepStream 6.0, TensorRT 8.4.2, CUDA 11.4, and cuDNN 8.2.4. The requirements for DeepStream 6.0.1/6.0 are:
Please follow the instructions in DeepStream 6.0.1 / 6.0 (https://github.com/marcoslucianops/DeepStream-Yolo#dgpu-installation) to install the correct versions and try again.
OK. I will try to reinstall my environment. Thank you a lot!
Have you solved the issue of memory leakage in DeepStream? I found a forum post (https://forums.developer.nvidia.com/t/memory-leak-in-deepstream/204163/9) where you also encountered this issue a year ago. My environment is similar to yours, and I'm using DeepStream version 6.0; the memory leak occurs on every machine. I have discovered that it is caused by a specific open-source plugin in DeepStream. Thank you very much!
Have you solved the memory leak issue in deepstream? If you have, please let us know the solution. Thank you very much. I'm using DeepStream version 6.0.
NVIDIA says to use the drop-on-latency parameter to prevent it, but that adds some artifacts to the image depending on the latency difference between the source and the processing + detection + output. In my case, I wrote code to drop the delayed buffers.
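For context, the buffer-dropping idea boils down to comparing each buffer's timestamp against the pipeline clock and discarding anything that lags beyond a threshold. The sketch below shows only that decision logic, detached from the actual GStreamer pad-probe plumbing; the function and parameter names are hypothetical, not the code referred to above:

```python
def should_drop(buffer_pts_ns: int, clock_now_ns: int, max_latency_ns: int) -> bool:
    """Return True when a buffer lags the pipeline clock by more than
    max_latency_ns and should be discarded instead of processed."""
    return clock_now_ns - buffer_pts_ns > max_latency_ns

# In a real pipeline this check would run inside a GStreamer pad probe,
# returning Gst.PadProbeReturn.DROP for late buffers.
assert should_drop(buffer_pts_ns=0, clock_now_ns=300_000_000, max_latency_ns=100_000_000)
assert not should_drop(buffer_pts_ns=250_000_000, clock_now_ns=300_000_000, max_latency_ns=100_000_000)
```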
@marcoslucianops Thank you for your patience and detailed answers. We successfully generated the engines (FP16 and FP32) and were able to run detection. To sum up the likely reasons we failed to generate an engine, or failed to detect with a generated one (I hope this helps others who hit the same problem): we had previously set up the environment on RTX-series graphics cards, and when setting up the T4 this time we reused the installation packages from the RTX machines (including the graphics driver), which may be the cause of our error. The DeepStream installation package we used was also not the one for the Tesla T4, which may likewise be why our program failed. That is my review of this problem and its possible causes. I recommend installing according to the method provided by the author. I sincerely wish you smooth work and good health!
Hi! I hit a problem when generating the engine for YOLOv7 on a Tesla T4 GPU. My environment is DeepStream 6.0, TensorRT 8.4.2, CUDA 11.4, and a cuDNN version compatible with CUDA 11.4. We use Darknet to train our YOLOv7 model, and then use the cfg file and weights file to generate the engine, as was done for YOLOv3. We have successfully used this method to generate engines on RTX 3060, 4080, etc. But this time, on the Tesla T4, the FP16 engine could not be generated; the FP32 engine could be generated, but we got no detection results. We compile the dynamic library libnvdsinfer_custom_impl_Yolo.so on each new machine and place it in the corresponding folder of our project. We use the config_infer_primary_yoloV7.txt file to specify the paths, which guides the engine generation process.
The error we met when generating the FP16 engine is the following:

#22 0x00007fff822ec257 in nvinfer1::IBuilder::buildEngineWithConfig(nvinfer1::INetworkDefinition&, nvinfer1::IBuilderConfig&) () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#23 0x00007fff822e0c7b in Yolo::createEngine(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*) () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#24 0x00007fff822f4a56 in NvDsInferYoloCudaEngineGet () from /home/dgp/ITS_code/its-deepstream/remote_gnr/model/libnvdsinfer_custom_impl_Yolo.so
#25 0x00007fffce435562 in nvdsinfer::TrtModelBuilder::getCudaEngineFromCustomLib(bool (*)(nvinfer1::IBuilder*, _NvDsInferContextInitParams*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), bool (*)(nvinfer1::IBuilder*, nvinfer1::IBuilderConfig*, _NvDsInferContextInitParams const*, nvinfer1::DataType, nvinfer1::ICudaEngine*&), _NvDsInferContextInitParams const&, NvDsInferNetworkMode&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#26 0x00007fffce4359b4 in nvdsinfer::TrtModelBuilder::buildModel(_NvDsInferContextInitParams const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#27 0x00007fffce3f55e4 in nvdsinfer::NvDsInferContextImpl::buildModel(_NvDsInferContextInitParams&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#28 0x00007fffce3f62a1 in nvdsinfer::NvDsInferContextImpl::generateBackendContext(_NvDsInferContextInitParams&) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#29 0x00007fffce3f053b in nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#30 0x00007fffce3f6ce9 in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) () from /opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infer.so
#31 0x00007fffd45677c1 in gst_nvinfer_start(_GstBaseTransform*) () from /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#32 0x00007fffe9bf6270 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#33 0x00007fffe9bf6505 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstbase-1.0.so.0
#34 0x00007ffff1e8c6ab in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#35 0x00007ffff1e8d126 in gst_pad_set_active () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#36 0x00007ffff1e6af0d in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#37 0x00007ffff1e7d884 in gst_iterator_fold () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#38 0x00007ffff1e6ba16 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#39 0x00007ffff1e6d95e in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#40 0x00007ffff1e6dc8f in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#41 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#42 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#43 0x00007ffff1e4da02 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#44 0x00007ffff1e6fd5e in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#45 0x00007ffff1e70045 in gst_element_change_state () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#46 0x00007ffff1e70499 in ?? () from /home/dgp/ITS_code/its-deepstream/remote_gnr/lib/libgstreamer-1.0.so.0
#47 0x0000555555612e48 in StreamControl::init(std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, int, long, int, int) ()
#48 0x0000555555679f39 in SmartDeviceControl::init_deep_pipeline() ()
#49 0x000055555567d49e in SmartDeviceControl::init() ()
#50 0x000055555556e678 in main ()

We have discovered that the code that generates the engine is causing the program to crash. How can we resolve this issue?
Hope you can help us address this problem. Thank you a lot!!!