dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

load custom onnx model, failed to convert bgr8 to rgb8 #47

Closed indra4837 closed 4 years ago

indra4837 commented 4 years ago

Getting the following error when running a custom YOLOv4 model with the detectnet node.

Running on a Jetson TX2.

```
[TRT] engine.cpp (986) - Cuda Error in executeInternal: 719 (unspecified launch failure)
[TRT] FAILED_EXECUTION: std::exception
[TRT] failed to execute TensorRT context on device GPU
[ERROR] [1600442843.225676073]: failed to run object detection on 640x360 image
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaRGB.cu:60
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaColorspace.cpp:225
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:141
[ERROR] [1600442843.226765144]: failed to convert 640x360 image (from bgr8 to rgb8) with CUDA
[ INFO] [1600442843.226885850]: failed to convert 640x360 bgr8 image
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaRGB.cu:60
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/Documents/jetson-inference/utils/cuda/cudaColorspace.cpp:225
[cuda] unspecified launch failure (error 719) (hex 0x2CF)
[cuda] /home/jetson-indra/catkin_ws/src/ros_deep_learning/src/image_converter.cpp:141
[ERROR] [1600442843.227861639]: failed to convert 640x360 image (from bgr8 to rgb8) with CUDA
```
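From the log, the TensorRT execution fails first and every subsequent CUDA call then reports the same error 719 (unspecified launch failures are sticky), so the color conversion looks like a downstream symptom rather than the cause. For reference, the failing BGR8→RGB8 step in image_converter.cpp goes through the cudaConvertColor() / cudaRGB.cu path shown in the log, roughly like this (a minimal sketch; function, enum, and header names assumed from jetson-utils):

```cpp
// Rough sketch of the conversion path that reports the error above
// (names assumed from jetson-utils). Error 719 is sticky: once the
// TensorRT execution faults, later CUDA calls report the same failure.
#include <jetson-utils/cudaColorspace.h>    // cudaConvertColor(), imageFormat
#include <jetson-utils/cudaMappedMemory.h>  // cudaAllocMapped()
#include <cstdint>

bool convertBGR8toRGB8( void* bgr8Input, void** rgb8Output,
                        size_t width, size_t height )
{
    // allocate CPU/GPU shared memory for the converted image
    if( !cudaAllocMapped(rgb8Output, width * height * 3 * sizeof(uint8_t)) )
        return false;

    // launches the CUDA kernel in cudaRGB.cu via cudaColorspace.cpp
    if( CUDA_FAILED(cudaConvertColor(bgr8Input, IMAGE_BGR8,
                                     *rgb8Output, IMAGE_RGB8,
                                     width, height)) )
        return false;  // this is where error 719 surfaces in the log

    return true;
}
```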

Any idea how to solve this issue? Thanks!

dusty-nv commented 4 years ago

YOLO isn't explicitly supported in the detectNet code; it would require additional pre/post-processing code to make it work.
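To give a rough idea of the missing post-processing side, here is a simplified sketch (assuming the raw network output has already been decoded into flat box candidates; real YOLOv4 heads additionally need per-scale anchor/grid decoding): filter by confidence and run non-maximum suppression, which is part of the extra code a YOLO model would need.

```cpp
// Simplified sketch of YOLO-style post-processing (not part of detectNet):
// candidates are assumed to already be decoded from the network output
// into absolute boxes; real YOLOv4 heads also need anchor/grid decoding.
#include <vector>
#include <algorithm>

struct Detection { float x, y, w, h, conf; int classID; };

// intersection-over-union of two boxes given as (x, y, width, height)
static float IoU( const Detection& a, const Detection& b )
{
    const float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    const float x2 = std::min(a.x + a.w, b.x + b.w);
    const float y2 = std::min(a.y + a.h, b.y + b.h);
    const float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    return inter / (a.w * a.h + b.w * b.h - inter);
}

// keep candidates above confThresh, then apply greedy per-class NMS
std::vector<Detection> postprocessYOLO( std::vector<Detection> candidates,
                                        float confThresh, float iouThresh )
{
    std::sort(candidates.begin(), candidates.end(),
              [](const Detection& a, const Detection& b) { return a.conf > b.conf; });

    std::vector<Detection> keep;

    for( const Detection& c : candidates )
    {
        if( c.conf < confThresh )
            continue;

        bool suppressed = false;

        for( const Detection& k : keep )
        {
            if( k.classID == c.classID && IoU(k, c) > iouThresh )
            {
                suppressed = true;
                break;
            }
        }

        if( !suppressed )
            keep.push_back(c);
    }

    return keep;
}
```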

indra4837 commented 4 years ago

@dusty-nv is there a way to load processed .trt models for the detectnet? I have done the necessary pre/post-processing and converted the ONNX file to a TensorRT model.

dusty-nv commented 4 years ago

is there a way to load processed .trt models for the detectnet?

There is, but it is buried fairly deep in the code, here: https://github.com/dusty-nv/jetson-inference/blob/12061e0a778ddc237bc153f81465488fe2539742/c/tensorNet.h#L303

That isn't usually used externally, but rather from inside detectNet, imageNet, or segNet when loading a model.
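For illustration, deserializing a pre-built .trt engine with the raw TensorRT runtime API looks roughly like the sketch below, which is more or less what tensorNet wraps when it loads a cached engine (a minimal sketch, not taken from tensorNet; the exact ILogger/deserialize signatures vary slightly between TensorRT versions, and error handling is omitted):

```cpp
// Minimal sketch of loading a serialized .trt engine with the TensorRT
// runtime API (roughly what tensorNet wraps internally). Error handling
// and cleanup are omitted for brevity.
#include <NvInfer.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger
{
    // note: the 'noexcept' qualifier is required by TensorRT 8+
    void log( Severity severity, const char* msg ) noexcept override
    {
        if( severity <= Severity::kWARNING )
            printf("[TRT] %s\n", msg);
    }
};

nvinfer1::ICudaEngine* loadEngine( const char* path )
{
    static Logger logger;

    // read the serialized engine from disk
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                            std::istreambuf_iterator<char>());

    // deserialize the blob into an executable CUDA engine
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size());
}
```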

Further, the pre/post-processing I was alluding to wasn't converting the model to TRT, but the runtime pre/post-processing of the input and output tensors. I am not sure what format YOLO expects (i.e. whether mean pixel subtraction/normalization is applied, whether it uses BGR or RGB input, and how the raw detection results are formatted in the network output).
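On the input side, if the export follows the common Darknet convention (RGB channel order, pixels scaled to [0,1], planar NCHW layout, no mean subtraction), the pre-processing would look roughly like the sketch below; the exact scaling and channel order depend on how the ONNX model was produced.

```cpp
// Hypothetical YOLO-style input pre-processing (not detectNet code):
// pack an RGB8 image into a planar NCHW float tensor scaled to [0,1].
// Letterbox resizing, BGR order, or mean subtraction may also be needed
// depending on how the model was exported.
#include <cstddef>
#include <cstdint>

void preprocessYOLO( const uint8_t* rgb8, float* tensor,
                     size_t width, size_t height )
{
    const size_t plane = width * height;  // elements per channel plane

    for( size_t y = 0; y < height; y++ )
    {
        for( size_t x = 0; x < width; x++ )
        {
            const size_t px = y * width + x;

            tensor[0 * plane + px] = rgb8[px * 3 + 0] / 255.0f;  // R plane
            tensor[1 * plane + px] = rgb8[px * 3 + 1] / 255.0f;  // G plane
            tensor[2 * plane + px] = rgb8[px * 3 + 2] / 255.0f;  // B plane
        }
    }
}
```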

indra4837 commented 4 years ago

@dusty-nv Ahh, I see. I managed to use NVIDIA's TensorRT samples to create a ROS node for YOLO. Thanks for your help!

ghost commented 3 years ago

Hi @dusty-nv

So I've read through several issues where you repeatedly clarify that this repo and the jetson-inference repo do not support YOLO. Forgive me for beating a dead horse, but long term, is there any chance of a ROS node like this supporting a YOLO inference engine at some point?

For now I can muddle along with my own custom ROS node, but long term I'm curious about NVIDIA's roadmap. As pointed out above, NVIDIA gives clear instructions on combining TensorRT and YOLO, so having ROS support for a YOLO inference engine would be superb performance-wise (for >>30 FPS object detection use cases).

dusty-nv commented 3 years ago

is there any chance of a ROS node like this supporting a YOLO inference engine at some point?

I don't personally plan on creating a ROS node for TensorRT-accelerated YOLO, so you may want to continue with your own efforts. It gets to be a lot to maintain different detection architectures for training + inference over time, so for now I prefer to just keep SSD-Mobilenet rather than the now-numerous YOLO variants.