dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

how to run custom detectnet model #20

Closed inderpreetsingh01 closed 4 years ago

inderpreetsingh01 commented 4 years ago

I have a detectNet model trained to detect an object of interest. I am able to use the default models like pednet, but I'm not sure how to specify the path to my deploy.prototxt file and the model name.

Any suggestion is highly appreciated. Thanks,
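A possible starting point, assuming the detectnet launch file exposes the same parameters as the underlying detectnet node (`model_path`, `prototxt_path`, `class_labels_path`, and the layer-name overrides). The paths below are placeholders, and the layer names shown are the jetson-inference defaults for a caffemodel-based DetectNet:

```shell
# Sketch only -- substitute your own file paths.
# For a DIGITS/caffe DetectNet, jetson-inference's default layer names are
# "data" (input), "coverage" (coverage map), and "bboxes" (bounding boxes).
roslaunch ros_deep_learning detectnet.ros1.launch \
    model_path:=/path/to/snapshot.caffemodel \
    prototxt_path:=/path/to/deploy.prototxt \
    class_labels_path:=/path/to/class_labels.txt \
    input_blob:=data \
    output_cvg:=coverage \
    output_bbox:=bboxes \
    input:=/dev/video0 output:=display://0
```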

hoangcuongbk80 commented 4 years ago

@inderpreetsingh01 Did you solve the problem? I have a custom model too and tried to run:

```
roslaunch ros_deep_learning detectnet.ros1.launch model_path:=/home/ekobot/jetson-inference/python/training/detection/ssd/backup/SSD-MobileNet_20200809_02/ssd-mobilenet.onnx class_labels_path:=/home/ekobot/jetson-inference/python/training/detection/ssd/backup/SSD-MobileNet_20200809_02/labels.txt input:=/home/ekobot/Downloads/sample-mp4-file.mp4 output:=display://0
```

I got the error:

```
[TRT]   INVALID_ARGUMENT: Cannot find binding of given name:
[TRT]   failed to find requested input layer in network
[TRT]   device GPU, failed to create resources for CUDA engine
[TRT]   failed to create TensorRT engine for /home/ekobot/jetson-inference/python/training/detection/ssd/backup/SSD-MobileNet_20200809_02/ssd-mobilenet.onnx, device GPU
[TRT]   detectNet -- failed to initialize.
[ERROR] [1598110452.218353290]: failed to load detectNet model
```

Any suggestion is highly appreciated. @dusty-nv Thanks,
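The "Cannot find binding of given name" error typically means the node is looking up default layer names that don't exist in the loaded network. Assuming this `.onnx` was exported with jetson-inference's SSD-Mobilenet export script (which names the input tensor `input_0` and the output tensors `scores` and `boxes`), passing those binding names explicitly may resolve it:

```shell
# Sketch of a likely fix: override the default caffe layer names with the
# ONNX tensor names (assumes a jetson-inference SSD-Mobilenet export).
roslaunch ros_deep_learning detectnet.ros1.launch \
    model_path:=/home/ekobot/jetson-inference/python/training/detection/ssd/backup/SSD-MobileNet_20200809_02/ssd-mobilenet.onnx \
    class_labels_path:=/home/ekobot/jetson-inference/python/training/detection/ssd/backup/SSD-MobileNet_20200809_02/labels.txt \
    input_blob:=input_0 \
    output_cvg:=scores \
    output_bbox:=boxes \
    input:=/home/ekobot/Downloads/sample-mp4-file.mp4 \
    output:=display://0
```

If the binding names still don't match, inspecting the model (e.g. with an ONNX viewer such as Netron) will show the actual tensor names to pass.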

zzjkf2009 commented 3 years ago

Same error. Did you ever find a solution?