DanaHan / Yolov5-in-Deepstream-5.0

Describe how to use yolov5 in Deepstream 5.0

what are the commit head numbers of yolov5 and tensorrtx? #16

Open niuwenju opened 3 years ago

niuwenju commented 3 years ago

I have successfully converted the .pt file to a .engine, but when I run LD_PRELOAD=./libcustomOp.so deepstream-app -c deepstream_app_config_yoloV5.txt I get the following errors:

ERROR: ld.so: object './libcustomOp.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.

Using winsys: x11
ERROR: [TRT]: INVALID_ARGUMENT: getPluginCreator could not find plugin YoloLayer_TRT version 1
ERROR: [TRT]: safeDeserializationUtils.cpp (323) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
ERROR: [TRT]: INVALID_STATE: std::exception
ERROR: [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: Deserialize engine failed from file: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/yolov5s.engine
0:00:01.482911499 13417 0x2cc85690 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/yolov5s.engine failed
0:00:01.483029190 13417 0x2cc85690 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/yolov5s.engine failed, try rebuild
0:00:01.483071397 13417 0x2cc85690 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
Begin cuda engine build...
Yolo type is not defined from config file name:
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:01.484345041 13417 0x2cc85690 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:01.484397071 13417 0x2cc85690 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:01.484426862 13417 0x2cc85690 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:01.484761504 13417 0x2cc85690 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:01.484788383 13417 0x2cc85690 WARN nvinfer gstnvinfer.cpp:809:gst_nvinfer_start: error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
** ERROR: : Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Failed to create NvDsInferContext instance
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(809): gst_nvinfer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie: Config file path: /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/config_infer_primary_yoloV5.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
App run failed
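The first line of that log already hints at the root cause: ld.so could not open ./libcustomOp.so, so the YoloLayer_TRT plugin creator was never registered and the engine could not be deserialized. A minimal sanity check, using the names from the command above and assuming the library is expected in the current directory:

```sh
# Confirm the preloaded library actually exists and is readable; if ld.so prints
# "cannot be preloaded ... ignored", the plugin inside it is never registered and
# TensorRT cannot deserialize yolov5s.engine.
ls -l ./libcustomOp.so

# Preloading by absolute path avoids surprises from the working directory.
LD_PRELOAD="$(pwd)/libcustomOp.so" deepstream-app -c deepstream_app_config_yoloV5.txt
```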

PareshKamble commented 3 years ago

Hi @niuwenju, I have been stuck at the same error since yesterday too. Please share the solution if you manage to solve it. I even replaced libcustomOp.so with a freshly built libnvdsinfer_custom_impl_Yolo.so. I get the same errors EXCEPT your "ERROR: ld.so: object './libcustomOp.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored", since libnvdsinfer_custom_impl_Yolo.so is found.

NOTE: When I replaced both hardswish.cu and yololayer.cu in tensorrtx/yolov5 with the ones from this repository, I got the following error:

paresh@paresh:~/Documents/DeepStream/tensorrtx/yolov5/build$ make
-- Configuring done
-- Generating done
-- Build files have been written to: /home/Documents/DeepStream/tensorrtx/yolov5/build
[ 25%] Building NVCC (Device) object CMakeFiles/myplugins.dir/myplugins_generated_yololayer.cu.o
/home/Documents/DeepStream/one_more_try/tensorrtx/yolov5/yololayer.h(85): warning: function "nvinfer1::IPluginV2Ext::configurePlugin(const nvinfer1::Dims *, int, const nvinfer1::Dims *, int, const nvinfer1::DataType *, const nvinfer1::DataType *, const __nv_bool *, const __nv_bool *, nvinfer1::PluginFormat, int)" is hidden by "nvinfer1::YoloLayerPlugin::configurePlugin" -- virtual function override intended?

/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(9): error: no instance of overloaded function "nvinfer1::YoloLayerPlugin::YoloLayerPlugin" matches the specified type
/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(13): error: identifier "yolo1" is undefined
/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(14): error: identifier "yolo2" is undefined
/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(15): error: identifier "yolo3" is undefined
/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(149): error: no instance of constructor "nvinfer1::YoloLayerPlugin::YoloLayerPlugin" matches the argument list
/home/Documents/DeepStream/tensorrtx/yolov5/yololayer.cu(256): error: no instance of constructor "nvinfer1::YoloLayerPlugin::YoloLayerPlugin" matches the argument list

6 errors detected in the compilation of "/tmp/tmpxft_00005315_00000000-6_yololayer.cpp1.ii".
CMake Error at myplugins_generated_yololayer.cu.o.Debug.cmake:279 (message):
  Error generating file
  /home/Documents/DeepStream/tensorrtx/yolov5/build/CMakeFiles/myplugins.dir//./myplugins_generated_yololayer.cu.o

CMakeFiles/myplugins.dir/build.make:334: recipe for target 'CMakeFiles/myplugins.dir/myplugins_generated_yololayer.cu.o' failed
make[2]: *** [CMakeFiles/myplugins.dir/myplugins_generated_yololayer.cu.o] Error 1
CMakeFiles/Makefile2:104: recipe for target 'CMakeFiles/myplugins.dir/all' failed
make[1]: *** [CMakeFiles/myplugins.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

However, when I replaced only hardswish.cu and NOT yololayer.cu, the build ran smoothly, generated the engine file, and got me to the final stage.

@DanaHan please provide a viable solution or some insight into this problem. Thanking you in anticipation!

niuwenju commented 3 years ago

Please replace hardswish.cu and yololayer.cu in tensorrtx/yolov5. After make and sudo ./yolov5 -s you will get 'yolov5s.engine' and 'libmyplugin.so'; then run the DeepStream demo with LD_PRELOAD=<path to libmyplugin.so> deepstream-app -c <app-config>.
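A minimal sketch of that sequence; the repository paths are placeholders, and the DeepStream sample directory is the one from the log above:

```sh
# Copy the two replacement sources from this repository into tensorrtx/yolov5
# BEFORE configuring the build (see the clarification further down the thread).
cp /path/to/Yolov5-in-Deepstream-5.0/hardswish.cu \
   /path/to/Yolov5-in-Deepstream-5.0/yololayer.cu \
   /path/to/tensorrtx/yolov5/

cd /path/to/tensorrtx/yolov5
mkdir -p build && cd build
cmake ..
make                 # builds the yolov5 binary and libmyplugin.so
sudo ./yolov5 -s     # writes yolov5s.engine

# Copy both artifacts next to the DeepStream config and run the demo.
cp yolov5s.engine libmyplugin.so /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/
cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo
LD_PRELOAD=./libmyplugin.so deepstream-app -c deepstream_app_config_yoloV5.txt
```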

niuwenju commented 3 years ago

Or you can copy all files in this repository, except the Deepstream 5.0 folder, into tensorrtx/yolov5.
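A hypothetical one-liner for that alternative; the excluded directory name is assumed to be "Deepstream 5.0", so check the actual folder name in the repository:

```sh
# Copy everything from this repository into tensorrtx/yolov5,
# skipping its DeepStream-specific folder.
rsync -av --exclude='Deepstream 5.0/' /path/to/Yolov5-in-Deepstream-5.0/ /path/to/tensorrtx/yolov5/
```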

PareshKamble commented 3 years ago

Dear @niuwenju, thank you for your quick response.

Do we have to replace hardswish.cu and yololayer.cu in tensorrtx/yolov5 AFTER make and BEFORE doing sudo ./yolov5 -s, or in some other order?

And which should be the app-config file: config_infer_primary_yoloV5.txt or deepstream_app_config_yoloV5.txt?

Thanking you in anticipation!

niuwenju commented 3 years ago

Replace hardswish.cu and yololayer.cu before cmake .., and app-config means deepstream_app_config_yoloV5.txt.
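In other words, deepstream_app_config_yoloV5.txt is the application-level config passed to deepstream-app, and it points at config_infer_primary_yoloV5.txt through its [primary-gie] section. A quick way to confirm that on your copy, assuming the standard deepstream-app config layout:

```sh
# Show where the app config references the nvinfer config.
grep -A10 '^\[primary-gie\]' deepstream_app_config_yoloV5.txt | grep 'config-file'
# Expected to print something like: config-file=config_infer_primary_yoloV5.txt
```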

PareshKamble commented 3 years ago

Dear @niuwenju, thank you for your response. I managed to run the sample video.

rahulsharma11 commented 2 years ago

Hi, I am trying to run this with DeepStream 6 and TensorRT 8.2.0.

Just for information: if I copy hardswish.cu and yololayer.cu into tensorrtx/yolov5, then make fails. It throws an error related to TensorRT.

But when I copy only hardswish.cu and not yololayer.cu, I am able to make and also to run. Not sure why.