murthax opened this issue 4 years ago
...same here, I followed the example:
https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
...and get the same result/message as mentioned above.
I don't know if it's important, but when TensorRT loads the model it throws an error and continues:
...
ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input:=csi://0 output:=display://0
Like you did on the detectnet command line, you also need to set these ROS params to use a custom detection model (see the example below):
input_blob
output_cvg
output_bbox
For more info on the node parameters, see here: https://github.com/dusty-nv/ros_deep_learning#detectnet-node-1
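For example, a launch command along these lines should work for the SSD-MobileNet ONNX model, assuming detectnet.ros2.launch forwards these arguments to the node parameters of the same name (the blob names input_0, scores, and boxes are the ones used with the detectnet command line earlier in this thread; adjust the paths for your setup):

ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input_blob:=input_0 output_cvg:=scores output_bbox:=boxes input:=csi://0 output:=display://0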
It works! Thanks for your help.
BTW, one thing that took me a while to figure out: when exporting a "Pascal VOC 1.1" dataset from CVAT, you get a label file that is incompatible both in filename (labelmap.txt instead of labels.txt) and in content:

# label:color_rgb:parts:actions
background:0,0,0::
green cone:128,0,0::

Renaming the file and changing its content to simply

green cone

did the trick.
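In case it helps anyone else, something like this one-liner should derive labels.txt from labelmap.txt, assuming the label:color_rgb:parts:actions format shown above (it drops the comment header and the background entry, and keeps only the label names, one per line):

# keep only the label name before the first ':', skipping the header and background
grep -v '^#' labelmap.txt | cut -d: -f1 | grep -vi '^background$' > labels.txt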
Thanks for the great tool you provided us with......!
Ah snap! Thanks for replying. It's working.
Hi, does anyone know what those params should be for YOLOv3? I tried some, but they are not right: input_blob, output_cvg, output_bbox.
Good question, how did you approach the task of using YOLO with this node?
Check this post out. There was another one I had seen (on the jetson-inference GitHub) but the message was basically the same:
https://forums.developer.nvidia.com/t/how-to-deploy-yolov5-on-jetson-inferences-detectnet/155616
Hi all,
Running the latest JetPack on an Xavier NX. I used https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-collect-detection.md to train a custom dataset, and that seems OK.
If I run:

NET=~/jetson-inference/python/pytorch-ssd/test
detectnet --model=$NET/ssd-mobilenet.onnx --labels=$NET/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes csi://0
This works properly.
I'm now trying to use this with ros_deep_learning. If I try a command like this:
ros2 launch ros_deep_learning detectnet.ros2.launch model_path:=/home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx class_labels_path:=home/magneto/jetson-inference/python/pytorch-ssd/test/labels.txt input:=csi://0 output:=display://0
I get errors as seen here:
[detectnet-2] [TRT] INVALID_ARGUMENT: Cannot find binding of given name:
[detectnet-2] [ERROR] [detectnet]: failed to load detectNet model
[detectnet-2] [TRT] failed to find requested input layer in network
[detectnet-2] [TRT] device GPU, failed to create resources for CUDA engine
[detectnet-2] [TRT] failed to create TensorRT engine for /home/magneto/jetson-inference/python/pytorch-ssd/test/ssd-mobilenet.onnx, device GPU
[detectnet-2] [TRT] detectNet -- failed to initialize.
[INFO] [detectnet-2]: process has finished cleanly [pid 30902]
[detectnet-2]
I am clearly missing something. Any help is appreciated.