Closed amalnanavati closed 1 year ago
Food segmentation and face detection use the compressed RGB image and the aligned depth image. Only food-on-fork detection uses the raw depth image, and it can be moved to aligned depth now that we have addressed the aligned depth instability issue.
Update: On t0b1, we can have 4 concurrent subscribers each of `/camera/color/image_raw/compressed` and `/camera/depth/image_rect_raw` without issues (untested whether issues start with more than this). On the other hand, with a republisher on t0b1 (#49), 4 concurrent subscribers each run at >= 19Hz, which is a 4x speedup for depth images and a 1.33x speedup for color images.
Our RealSense is running on a Jetson Nano. On the Nano itself, we can run at least 4 concurrent `ros2 topic echo /camera/color/image_raw` and `ros2 topic echo /camera/depth/image_rect_raw` each without issues. On a computer other than the Nano, we can have one concurrent subscriber each. As soon as we add a second subscriber to the color image, all subscribers to the depth image stop receiving images. Switching to compressed images appears to mitigate the problem, but it does not go away entirely.
This may be related to this comment, although our `librealsense` is configured for CUDA (I verified that GPU utilization increases when we launch the RealSense nodes). One potential way to address this is to create a republisher that is subscribed to only by nodes running on the same non-Nano machine. However, if the republisher subscribes to too many different camera topics (e.g., raw color, compressed color, raw depth, aligned depth), the subscribers in the republisher itself stop receiving images.
Hence, we need to converge on which ~2 topics all our perception nodes will use (probably compressed color and aligned depth) and develop a republisher for just those. Further, we should consider developing the republisher node such that it only publishes images if there is at least one subscriber, and otherwise skips the work (to save compute).
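A minimal sketch of how the "only publish if subscribed" gate could work. This is plain Python with no ROS dependencies so the logic stands alone; `LazyRepublisher`, `on_image`, and `subscriber_count` are hypothetical names. In an actual rclpy node, the count would come from `Publisher.get_subscription_count()`, checked inside the camera-topic callback:

```python
# Sketch of a "lazy" republisher gate: frames are forwarded only while
# at least one downstream subscriber exists, so decompression/copy work
# is skipped when nobody is listening. In rclpy, `subscriber_count`
# would be replaced by publisher.get_subscription_count(), and
# `forwarded.append(...)` by publisher.publish(...).

class LazyRepublisher:
    def __init__(self):
        self.subscriber_count = 0  # stand-in for get_subscription_count()
        self.forwarded = []        # stand-in for publisher.publish(...)

    def on_image(self, msg):
        """Camera-topic callback: republish only if someone is listening."""
        if self.subscriber_count == 0:
            return False  # drop the frame; save compute
        self.forwarded.append(msg)
        return True

repub = LazyRepublisher()
repub.on_image("frame0")        # no subscribers: frame is dropped
repub.subscriber_count = 1
repub.on_image("frame1")        # subscriber present: frame is forwarded
assert repub.forwarded == ["frame1"]
```

The same check could gate the republisher's own subscriptions (create them only on demand), which would also reduce the number of camera topics it pulls from the Nano at any one time.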