dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

Running a Retrained Model with ros node #50

Open murthax opened 3 years ago

murthax commented 3 years ago

Hi all,

I'm running the latest JetPack on a Xavier NX. I used https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect-detection.md to train a custom dataset, and that seems OK.

If I run:

NET=~/jetson-inference/python/pytorch-ssd/test

detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0

This works properly.

I'm now trying to use this with ros deep learning. If I try a command like this:

ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input:=csi://0 output:=display://0

I get errors as seen here:

[detectnet-2] [TRT] INVALID_ARGUMENT: Cannot find binding of given name:
[detectnet-2] [ERROR] [detectnet]: failed to load detectNet model
[detectnet-2] [TRT] failed to find requested input layer in network
[detectnet-2] [TRT] device GPU, failed to create resources for CUDA engine
[detectnet-2] [TRT] failed to create TensorRT engine for /home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx, device GPU
[detectnet-2] [TRT] detectNet -- failed to initialize.
[INFO] [detectnet-2]: process has finished cleanly [pid 30902]

I am clearly missing something. Any help is appreciated.

WaldoPepper commented 3 years ago

...same here, I followed the example:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md

...and get the same result/message as mentioned above.

I don't know if it's important, but when TensorRT loads its plugins it throws an error and continues:

......
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
......
......
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1   <---- marked in red as an error
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
......

dusty-nv commented 3 years ago

ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input:=csi://0 output:=display://0

Like you did in the detectnet command line, you also need to set these ROS params to use a custom detection model:

input_blob:=input_0
output_cvg:=scores
output_bbox:=boxes

For more info of the node parameters, see here: https://github.com/dusty-nv/ros_deep_learning#detectnet-node-1
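Putting the pieces together, the full launch command would look something like the sketch below. The model and label paths are taken from the original post (adjust them to your own setup), and the blob names are the same ones used in the working detectnet command earlier in this thread:

```shell
# Launch detectnet with a custom SSD-Mobilenet ONNX model.
# The input_blob / output_cvg / output_bbox params must match the
# layer names baked into the exported ONNX model.
MODEL_DIR=/home/magneto/jetson-inference/python/pytorch-ssd/test

ros2 launch ros_deep_learning detectnet.ros2.launch \
    model_path:=$MODEL_DIR/ssd-mobilenet.onnx \
    class_labels_path:=$MODEL_DIR/labels.txt \
    input_blob:=input_0 output_cvg:=scores output_bbox:=boxes \
    input:=csi://0 output:=display://0
```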

WaldoPepper commented 3 years ago

It works! Thanks for your help.


BTW, one thing that took me a while to figure out: when exporting a "Pascal VOC 1.1" dataset from CVAT, you get a label file that is incompatible both in filename (labelmap.txt instead of labels.txt) and in content:

# label:color_rgb:parts:actions
background:0,0,0::
green cone:128,0,0::

Renaming the file and changing the content to simply

green cone

did the trick.
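The conversion described above can be automated. A minimal sketch (the function name and the exact labelmap format assumptions are mine, based on the example shown): drop comment lines and the background entry, and keep only the class name before the first colon.

```python
def labelmap_to_labels(labelmap_text):
    """Convert a CVAT Pascal VOC labelmap.txt to a plain labels.txt list.

    Each labelmap line looks like 'name:color_rgb:parts:actions';
    comment lines start with '#'. We keep only the class names and
    skip the 'background' entry, matching what detectnet expects.
    """
    labels = []
    for line in labelmap_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and the header comment
        name = line.split(":", 1)[0]
        if name.lower() == "background":
            continue  # detectnet's labels.txt does not list background
        labels.append(name)
    return labels


if __name__ == "__main__":
    labelmap = (
        "# label:color_rgb:parts:actions\n"
        "background:0,0,0::\n"
        "green cone:128,0,0::\n"
    )
    print(labelmap_to_labels(labelmap))  # → ['green cone']
```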

Thanks for the great tool you've provided us with!

murthax commented 3 years ago

Ah snap! Thanks for replying. It's working.

zzjkf2009 commented 3 years ago

Hi, does anyone know what these params should be for YOLOv3? I tried some values, but they were not right: input_blob, output_cvg, output_bbox.

WaldoPepper commented 3 years ago

Hi, does anyone know what these params should be for YOLOv3? I tried some values, but they were not right: input_blob, output_cvg, output_bbox.

Good question. How did you approach the task of using YOLO with this node?

murthax commented 3 years ago

Check this post out. There was another one I had seen (on the jetson-inference GitHub), but the message was basically the same:

https://forums.developer.nvidia.com/t/how-to-deploy-yolov5-on-jetson-inferences-detectnet/155616