dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT
862 stars · 258 forks

Use two cameras with ros_deep_learning #134

Open kleanbotmk2 opened 5 months ago

kleanbotmk2 commented 5 months ago

Hello,

I managed to use ros_deep_learning with ROS2 Foxy and a USB camera (by the way, is it normal that I get 9 fps on average, instead of the 45 fps I get without ROS2?). I'd like to know how I could use two USB cameras to do two detections at the same time.

I've already tried remapping the topics, but I couldn't get it to work.

Could you help me?

Thanks in advance,

Sacha

kleanbotmk2 commented 4 months ago

Any help?

dusty-nv commented 4 months ago

Hi @kleanbotmk2, I don't think this is so much a question specific to ros_deep_learning as it is about setting up your ROS2 launch files. For two cameras, you would create two video_source nodes. They can both share the same detectnet node though, because detectnet is stateless and doesn't care where each independent image comes from. That also avoids loading the detection DNN twice.
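A minimal launch-file sketch of the two-source / shared-detectnet setup might look like the following. The executable, parameter, and topic names here (`video_source`, `detectnet`, `resource`, the `raw` and `image_in` topics) are assumptions modeled on the package's own `detectnet.ros2.launch` conventions, so verify them against your install:

```python
# Sketch of a ROS2 Foxy launch file: two video_source nodes, one detectnet.
# Names are assumptions -- check against ros_deep_learning's shipped launch files.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    left = Node(package='ros_deep_learning', executable='video_source',
                name='video_source_left',
                parameters=[{'resource': '/dev/video0'}])
    right = Node(package='ros_deep_learning', executable='video_source',
                 name='video_source_right',
                 parameters=[{'resource': '/dev/video1'}])
    # One shared detectnet node; here it is remapped to the left stream.
    # Remap its input topic to switch which camera it consumes.
    detect = Node(package='ros_deep_learning', executable='detectnet',
                  name='detectnet',
                  remappings=[('/detectnet/image_in', '/video_source_left/raw')])
    return LaunchDescription([left, right, detect])
```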

kleanbotmk2 commented 4 months ago

Hi @dusty-nv, and thanks for your reply. To be more precise: I want to keep the two camera streams separated from each other, because I want to perform simple stereo detection with a left and a right camera. I need a BBox for the left cam and a BBox for the right one; then I intend to determine the depth of the recognised object from the lateral separation of the BBoxes in pixels. So I have to know where each BBox comes from, left or right. If I subscribe to the two cameras at the input and feed detectnet as you suggest, it will provide BBoxes and the overlay image through a single output, independent of the origin of the image, which is what I want to avoid. I do want to be able to publish two different outputs. Thanks again for your help.
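For reference, the depth-from-lateral-separation step described above is the standard stereo disparity relation, depth = focal_length × baseline / disparity. A minimal sketch (the function name and example calibration values are hypothetical; use your own camera calibration):

```python
def depth_from_bboxes(x_left_px, x_right_px, focal_px, baseline_m):
    """Estimate depth from the horizontal separation of matched BBox centers.

    x_left_px / x_right_px: horizontal BBox centers in the left/right image
    focal_px: focal length in pixels (from camera calibration)
    baseline_m: distance between the two cameras in meters
    """
    disparity = x_left_px - x_right_px  # pixels; positive for a point in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: mismatched boxes or object at infinity")
    return focal_px * baseline_m / disparity  # depth in meters

# Hypothetical example: 700 px focal length, 6 cm baseline, 35 px disparity
print(depth_from_bboxes(400.0, 365.0, 700.0, 0.06))  # -> 1.2 m
```

This assumes rectified images (both cameras row-aligned); otherwise the BBox centers must be rectified first.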

zaher88abd commented 4 months ago

Hi @kleanbotmk2,

I am working on a similar problem. What I did: I used usb_cam nodes to read from the cameras, then created a node that reads the cameras' output topics. In my case, I need to switch between the cameras, so each time I read a frame from one of the cameras, I publish the one I want to the output topic, which is used as the input to the model node. In your case, though, I think you could merge both frames into one image and then publish it to the output topic, which will be used as the input to the model. You could check this post: https://stackoverflow.com/questions/72690021/how-to-process-a-image-message-with-opencv-from-ros2.
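The frame-merging idea above can be sketched with NumPy (the function name is hypothetical; in a real node you would first convert the sensor_msgs/Image to an array, e.g. with cv_bridge, and you could later split detections back into left/right halves by their x coordinate):

```python
import numpy as np

def merge_side_by_side(left, right):
    """Stack two equal-height frames (H x W x 3 uint8) into one wide image."""
    assert left.shape[0] == right.shape[0], "frames must share the same height"
    return np.hstack((left, right))

# Hypothetical 640x480 left/right frames merged into one 1280x480 image
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
merged = merge_side_by_side(left, right)
print(merged.shape)  # -> (480, 1280, 3)
```

A BBox whose center x is below the left frame's width then came from the left camera, otherwise from the right (after subtracting the left width).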