dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

Any way to integrate my custom trained model? #73

Open FirasAbdennadher opened 3 years ago

FirasAbdennadher commented 3 years ago

I am looking to integrate my custom model (instance segmentation)

dusty-nv commented 3 years ago

Hi @FirasAbdennadher, instance segmentation is different from the semantic segmentation supported in the underlying jetson-inference library, so you would need to add support for it yourself. Typically that means writing the pre/post-processing code. You would also need to export your model to ONNX and check that TensorRT can load it (you can quickly check that with the trtexec tool found under /usr/src/tensorrt/bin).
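The trtexec check mentioned above can be sketched as follows; model.onnx is a placeholder for your exported file, and the optional FP16 flags are just common choices on Jetson, not something prescribed by this thread:

```shell
# Sanity-check that TensorRT can parse and build an engine from the ONNX file.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx

# Optionally build and time an FP16 engine, saving it for reuse:
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --fp16 --saveEngine=model.engine
```

If trtexec reports a parsing failure, the error message usually names the unsupported ONNX operator, which tells you what the custom pre/post-processing code has to handle instead.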

If you are using PyTorch, you also may want to check out this node which uses torch2trt: https://github.com/NVIDIA-AI-IOT/ros2_torch_trt

FirasAbdennadher commented 3 years ago

@dusty-nv Hello, and thank you for this repo and for your answer. I trained my custom model based on the official matterport/Mask_RCNN repository on GitHub and tested it on my laptop, where it worked fine. By the way, I am a newbie with the Jetson Nano and am confused about how I can test the model on it. Thanks in advance.
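Since matterport/Mask_RCNN is a Keras/TensorFlow project, one possible route to the ONNX export dusty-nv describes is tf2onnx. This is only a sketch: the SavedModel path, output filename, and opset are assumptions, and the matterport model uses custom layers that may not convert cleanly:

```shell
pip install tf2onnx

# Assumes you have first saved the trained Keras model as a TensorFlow SavedModel.
python -m tf2onnx.convert --saved-model ./mask_rcnn_savedmodel --output mask_rcnn.onnx --opset 13

# Then check on the Jetson that TensorRT can actually load the result:
/usr/src/tensorrt/bin/trtexec --onnx=mask_rcnn.onnx
```

If the conversion or the trtexec check fails on specific operators, that is where the extra support work in jetson-inference would have to start.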

MrOCW commented 3 years ago

Hi @dusty-nv, I have a custom DetectNet model trained with TLT. How do I interface the TensorRT-related files with ROS?
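For a detection model that jetson-inference can already load (e.g. an ONNX export), the ros_deep_learning detectnet node can be pointed at custom files via launch arguments. The sketch below is an assumption based on the repo's ONNX/detectnet convention (the blob names input_0/scores/boxes match jetson-inference's pytorch-ssd exports); verify the launch file and argument names against your version of the package, and note that a TLT model would first need to be exported to a format TensorRT/jetson-inference can load:

```shell
roslaunch ros_deep_learning detectnet.ros1.launch \
    model_path:=/path/to/model.onnx \
    class_labels_path:=/path/to/labels.txt \
    input_blob:=input_0 \
    output_cvg:=scores \
    output_bbox:=boxes
```

On ROS2 the equivalent would go through the package's ros2 launch files with the same style of parameters.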