We have been aware of an issue where, if there are too many subscribers to the compressed RGB and depth images on nano, nano stops publishing on one or more of the compressed image topics. With our earlier networking setup, this was triggered if there was more than one subscriber on lovelace for each of the RGB and depth image topics. With our new networked setup, it is triggered even if there is just one subscriber on lovelace for each of those topics. This is a problem because the minimal set of subscribers lovelace needs is one to the compressed depth image and one to the compressed RGB image.
This PR addresses that by adding a nano_bridge that essentially combines the RGB and depth topics into a single topic on the nano side, and then separates them into two topics again on the lovelace side. This means that we only have one large subscription going over WiFi -- the combined RGB + depth subscription -- which should alleviate the aforementioned issue.
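The multiplexing idea can be sketched as follows. This is a hypothetical wire format, not necessarily what nano_bridge actually uses: each combined message carries a length-prefixed source topic name followed by the compressed image bytes, so the receiver can demultiplex on the topic name and republish.

```python
import struct

def encode(topic: str, payload: bytes) -> bytes:
    """Prefix the payload with its source topic name so one combined
    stream can carry messages from several topics."""
    name = topic.encode()
    return struct.pack(">H", len(name)) + name + payload

def decode(data: bytes) -> tuple[str, bytes]:
    """Recover (topic, payload) so the receiver can republish the
    payload on its original topic."""
    (name_len,) = struct.unpack_from(">H", data, 0)
    topic = data[2:2 + name_len].decode()
    return topic, data[2 + name_len:]

# Round-trip one RGB message through the combined encoding.
msg = encode("/camera/color/image_raw/compressed", b"jpeg-bytes")
topic, payload = decode(msg)
assert topic == "/camera/color/image_raw/compressed"
assert payload == b"jpeg-bytes"
```

One design consequence of sending one image per combined message (rather than pairing RGB and depth) is that the combined topic's rate is the sum of the two image rates, which matches the ~24 Hz observed below.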
Potential Future Improvements
To additionally remove the camera_info subscription from WiFi, we could add a dummy camera info node to the nano_bridge package, potentially drawing from this dummy RealSense node.
If even this doesn't work and we have to get rid of ROS over WiFi entirely, the sender should essentially open a UDP socket and stream the data over it, and the receiver should receive that stream and convert it back into ROS topics. That way, we'd have ROS running locally on both lovelace and nano, but not connecting them.
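A minimal sketch of that fallback, using only the standard library (addresses and payload are illustrative; a real implementation would also need to fragment images larger than the ~64 KB UDP datagram limit):

```python
import socket

# Receiver side (would run on lovelace and republish as ROS messages).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # loopback for this demo; port chosen by OS
receiver.settimeout(2.0)
port = receiver.getsockname()[1]

# Sender side (would run on nano, serializing each image message).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# On the robot this would be lovelace's address, not loopback.
sender.sendto(b"compressed-image-bytes", ("127.0.0.1", port))

# The receiver would deserialize `data` and republish it on a ROS topic.
data, _addr = receiver.recvfrom(65535)
assert data == b"compressed-image-bytes"
sender.close()
receiver.close()
```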
Testing procedure
[x] In sim mock: launch the demo, verify it works.
[x] In real:
[x] Launch the demo.
[x] Run ros2 topic hz /local/camera/color/image_raw/compressed. Ensure it makes sense, and log the value here: 5-7Hz
[x] Run ros2 topic hz /local/camera/aligned_depth_to_color/image_raw/compressedDepth. Ensure it makes sense, and log the value here: 5-7Hz
[x] Terminate the perception nodes.
[x] Run ros2 topic hz /nano_bridge/data. Ensure it makes sense, and log the value here: 9-14Hz
[x] Run ros2 topic bw /nano_bridge/data. Ensure it makes sense, and log the value here: 1.1-1.7MB/s
[x] Re-run the perception nodes.
[x] Eat multiple bites, verify it works as expected. (In particular, the way I was notified of the issue in the first place is that during face detection, the perception screen would keep logging warnings because a depth image hadn't been received in several seconds, even though RGB images kept being received. Verify that error doesn't arise.)
Explanation of the above rates
On nano, /nano_bridge/data publishes at a rate of ~24 Hz, which is perfect given that each image topic publishes at a rate of ~12 Hz. However, on lovelace, /nano_bridge/data is received at a rate of ~14 Hz, which must be due to router bandwidth. As a result, each of the images is received at a rate of ~7 Hz.
Although unfortunate, that rate can be worked with. Whereas earlier one of the topics would receive at 14 Hz and the other at 0, now both receive at 7 Hz, which is a much better outcome.
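As a quick sanity check on the numbers above (using midpoints of the observed ranges; per-message size is an inferred estimate, not a measurement):

```python
# The combined topic interleaves two ~12 Hz image streams, so it
# publishes at ~24 Hz; over WiFi only ~14 Hz of it arrives.
publish_hz = 24
receive_hz = 14
per_stream_receive_hz = receive_hz / 2  # messages alternate RGB/depth
assert per_stream_receive_hz == 7.0

# Observed bandwidth (~1.1-1.7 MB/s at ~9-14 Hz) implies roughly
# 120 KB per combined message.
bytes_per_msg = 1.4e6 / 11.5  # midpoints of the observed ranges
print(round(bytes_per_msg / 1e3))  # ~122 KB
```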
Before opening a pull request
[x] Format your code using the black formatter: python3 -m black .
[x] Run your code through pylint and address all warnings/errors. The only warnings that are acceptable to leave unaddressed are TODOs that should be addressed in a future PR. From the top-level ada_feeding directory, run: pylint --recursive=y --rcfile=.pylintrc .
Description

In continued service of #73.

Before Merging

[ ] Squash & Merge