Closed Karleno closed 5 years ago
Hi @Karleno
Could someone please share a list of the available ROS topics when running their D435i?
@doronhi, who maintains intel-ros, can. That repo also has a launch file that can read librealsense bag file and transmit standard ROS topics. You might find this useful - D435i with RTabMap
Is it correct that the D435i can output both stereo images, as well as a RGB-D image, where the depth is based purely on active light patterns (from the D430 Depth Module)?
Yes, but - you specify Raspberry Pi as your platform, meaning you are planning to use the camera with USB2. In general, we cannot guarantee full camera functionality via USB2, and in the case of the D435i there is currently a firmware issue reducing camera capabilities to the bare minimum (#3209)
IMHO, it will also be fairly challenging to run RTabMap with Pi's limited compute, but that's beyond the scope of your question.
When connected via USB3, the camera can provide a gray-scale stereo pair (sensitive to near-infrared wavelengths, so it will see the active projector pattern), as well as depth and RGB data. Running everything together at high enough resolution and frame rate is not something every computer will handle, both in terms of bandwidth and stability of the USB subsystem.
By default, depth and RGB come from different viewpoints, and you can align them into RGB-D using this SDK, but it is a computationally demanding task. The algorithm is well optimized for Intel platforms (including AAEON's Up series) and we have dedicated optimizations for the NVidia Jetson series as well, but on other single-board computers it is likely to be very slow.
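Conceptually, what depth-to-color alignment does per depth pixel is deproject, transform, reproject. Here is a minimal pure-Python sketch of that math with made-up pinhole intrinsics and a hypothetical depth-to-color extrinsic (the real SDK call is rs.align / rs2::align; the numbers below are illustrative, not D435i calibration values):

```python
# Sketch of per-pixel depth-to-color alignment: back-project a depth pixel
# to 3-D, apply the depth->color rigid transform, project into the color image.
# Intrinsics are (fx, fy, cx, cy); all values here are made up for illustration.

def deproject(pixel, depth_m, intr):
    """Back-project a pixel at a given depth (metres) into a 3-D point."""
    fx, fy, cx, cy = intr
    u, v = pixel
    return ((u - cx) / fx * depth_m, (v - cy) / fy * depth_m, depth_m)

def transform(point, rotation, translation):
    """Apply a rigid transform (row-major 3x3 rotation, 3-vector translation)."""
    x, y, z = point
    return tuple(
        rotation[3 * i] * x + rotation[3 * i + 1] * y + rotation[3 * i + 2] * z
        + translation[i]
        for i in range(3)
    )

def project(point, intr):
    """Project a 3-D point in the color frame onto the color image plane."""
    fx, fy, cx, cy = intr
    x, y, z = point
    return (x / z * fx + cx, y / z * fy + cy)

def align_pixel(pixel, depth_m, depth_intr, color_intr, rotation, translation):
    point = deproject(pixel, depth_m, depth_intr)
    return project(transform(point, rotation, translation), color_intr)

# With identical intrinsics and an identity extrinsic, a pixel maps to itself.
IDENTITY_R = (1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)
INTR = (615.0, 615.0, 320.0, 240.0)  # fx, fy, cx, cy (illustrative)
print(align_pixel((400, 300), 1.5, INTR, INTR, IDENTITY_R, (0.0, 0.0, 0.0)))
```

This also shows why the operation is expensive on weak hardware: it runs for every depth pixel of every frame, which is where the SDK's platform-specific optimizations matter.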
Another thing - I wanted to clarify what you meant by "depth is based purely on active light patterns". The depth is not based purely on the active light pattern; that is what differentiates our technology from most of the competition and allows us to operate outdoors.
Thank you for your informative response @dorodnic. Apparently we had misunderstood how the depth sensing of the camera works. Initially we thought that the D435 could provide stereo RGB, so thank you for clarifying that there are two IR cameras and one RGB camera in the D435.
We were actually planning on running RTAB-MAP on a separate off-board computer over ROS, but perhaps this is an issue in terms of latency and wireless bandwidth? Naturally, on-board SLAM would be the best alternative, assuming infinite financial resources and available drone payload. The Euclid mentioned below may be within our budget, and capable of on-board SLAM. In either case, an upgrade from the RPi seems preferable to get around the USB 3.0 issues.
Using AAEON's Up series seems like a good alternative for USB 3.0/SLAM purposes, but while researching our alternatives we found another option, the Euclid Development Kit. How would you compare the two, considering that the Euclid has more processing power but its camera (ZR300 components) is older technology than the D435? The Euclid's camera also has a smaller FoV, for example, which will negatively affect the SLAM algorithm.
For our thesis, this also means that we will need a new research question. We are considering either using two D435(i) cameras, for front and back views, or comparing a normal stereoscopic RGB camera with the RGB-D Intel RealSense alternative. The Up only has one USB 3.0 port, which means we would have to share bandwidth through a USB hub. The Euclid, on the other hand, uses the discontinued ZR300, of which we would need to source a second unit for RTAB-Map to work with multiple cameras.
What is your recommendation?
The topics published depend on the parameters you pass to realsense2_camera.
After running:
roslaunch realsense2_camera rs_camera.launch
rostopic list
yields the following:
/camera/Motion_Module/parameter_descriptions
/camera/Motion_Module/parameter_updates
/camera/RGB_Camera/parameter_descriptions
/camera/RGB_Camera/parameter_updates
/camera/Stereo_Module/parameter_descriptions
/camera/Stereo_Module/parameter_updates
/camera/accel/imu_info
/camera/accel/sample
/camera/color/camera_info
/camera/color/image_raw
/camera/color/image_raw/compressed
/camera/color/image_raw/compressed/parameter_descriptions
/camera/color/image_raw/compressed/parameter_updates
/camera/depth/camera_info
/camera/depth/image_rect_raw
/camera/depth/image_rect_raw/compressed
/camera/depth/image_rect_raw/compressed/parameter_descriptions
/camera/depth/image_rect_raw/compressed/parameter_updates
/camera/extrinsics/depth_to_color
/camera/extrinsics/depth_to_infra1
/camera/extrinsics/depth_to_infra2
/camera/gyro/imu_info
/camera/gyro/sample
/camera/infra1/camera_info
/camera/infra1/image_rect_raw
/camera/infra1/image_rect_raw/compressed
/camera/infra1/image_rect_raw/compressed/parameter_descriptions
/camera/infra1/image_rect_raw/compressed/parameter_updates
/camera/infra2/camera_info
/camera/infra2/image_rect_raw
/camera/infra2/image_rect_raw/compressed
/camera/infra2/image_rect_raw/compressed/parameter_descriptions
/camera/infra2/image_rect_raw/compressed/parameter_updates
/camera/realsense2_camera_manager/bond
/diagnostics
/rosout
/rosout_agg
/tf_static
If you set the parameter unite_imu_method, for example unite_imu_method:=linear_interpolation,
the gyro and accel topics will be replaced by IMU topics:
/camera/imu
/camera/imu_info
If you set align_depth:=true,
the following topics will be added:
/camera/aligned_depth_to_color/camera_info
/camera/aligned_depth_to_color/image_raw
/camera/aligned_depth_to_color/image_raw/compressed
/camera/aligned_depth_to_color/image_raw/compressed/parameter_descriptions
/camera/aligned_depth_to_color/image_raw/compressed/parameter_updates
/camera/aligned_depth_to_infra1/camera_info
/camera/aligned_depth_to_infra1/image_raw
/camera/aligned_depth_to_infra1/image_raw/compressed
/camera/aligned_depth_to_infra1/image_raw/compressed/parameter_descriptions
/camera/aligned_depth_to_infra1/image_raw/compressed/parameter_updates
/camera/aligned_depth_to_infra2/camera_info
/camera/aligned_depth_to_infra2/image_raw
/camera/aligned_depth_to_infra2/image_raw/compressed
/camera/aligned_depth_to_infra2/image_raw/compressed/parameter_descriptions
/camera/aligned_depth_to_infra2/image_raw/compressed/parameter_updates
Setting filters:=pointcloud
will add the topic:
/camera/depth/color/points
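Putting the options above together, a single launch invocation might look like the following (parameter names are as described above; exact availability depends on your realsense2_camera version):

```shell
# Start the camera node with depth aligned to color, a unified IMU topic,
# and a point-cloud filter (all parameters described above).
roslaunch realsense2_camera rs_camera.launch \
    align_depth:=true \
    unite_imu_method:=linear_interpolation \
    filters:=pointcloud

# In another terminal, confirm the extra topics are being published:
rostopic list | grep -E "aligned_depth|imu|points"
```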
The value proposition of the Euclid is being an all-in-one robotics platform (compute + power + vision + wifi + sensors + ROS). IMHO, an Up Board + D435i would better suit your needs and cost less. Also, both the compute and vision components of the Euclid are starting to show their age.
IMHO, multi-camera streaming of multiple high-resolution streams is going to be a challenge on any low-power compute currently on the market. I would try one of the lower-cost boards (Up, NanoPi NEO4, ODROID-XU4) expecting problems, and if that doesn't work, consider more expensive solutions (Intel NUC, NVidia Jetson, Firefly-RK3399). To clarify, I do not endorse any of these, but you should be aware of the options. We only validate multi-camera setups on Intel NUCs (I could share exact models, but generally any decent last-generation platform).
@Karleno Any other questions about this ticket? Looking forward to your update. Thanks!
That repo also has a launch file that can read librealsense bag file and transmit standard ROS topics. @dorodnic which launch file are you referring to? I want to test d435i rosbags from https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md with RTAB-Map. How can I convert the bag? I need the aligned rgb-d data from the sensor.
@Karleno @dorodnic Do you guys have any experience with this?
Hi @dinosshan Bag files recorded in librealsense do not store aligned data, only individual streams.
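Since the bag stores only the individual streams, you can perform the alignment yourself at playback time with the SDK's align processing block. A minimal sketch, assuming pyrealsense2 is installed and "d435i_sample.bag" (a hypothetical filename) is one of the recordings from doc/sample-data.md:

```python
# Sketch: replay a librealsense .bag and align depth to color at playback
# time, since the recording stores only the raw streams.

def replay_aligned(bag_path, max_frames=30):
    # Imported inside the function so the sketch can be loaded without the SDK.
    import pyrealsense2 as rs

    cfg = rs.config()
    # Read frames from the recording instead of a live device.
    rs.config.enable_device_from_file(cfg, bag_path, repeat_playback=False)
    pipe = rs.pipeline()
    pipe.start(cfg)
    align = rs.align(rs.stream.color)  # align everything to the color viewpoint
    try:
        for _ in range(max_frames):
            frames = pipe.wait_for_frames()
            aligned = align.process(frames)
            depth = aligned.get_depth_frame()
            color = aligned.get_color_frame()
            if depth and color:
                # depth is now pixel-registered to the color image
                print(depth.get_width(), depth.get_height(),
                      color.get_width(), color.get_height())
    finally:
        pipe.stop()

if __name__ == "__main__":
    replay_aligned("d435i_sample.bag")
```

If you need the aligned data as ROS topics instead, replaying the bag through the realsense2_camera node with align_depth:=true achieves the same thing on the ROS side.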
Issue Description
Hi!
Together with a friend, I am planning to build an autonomous drone for our master's thesis, based on the Intel RealSense D435i, which I have not yet bought. First of all, we have tried to find a complete list of all the available ROS topics. We found the sample data (https://github.com/IntelRealSense/librealsense/blob/master/doc/sample-data.md), but since the bag is not a standard ROS bag, we cannot simply play it with the "rosbag" tool and run "rostopic list" to see them all. Could someone please share a list of the available ROS topics when running their D435i?
Part of our proposed research question is based around a comparison between stereo vs RGB-D input for the RTAB-MAP SLAM algorithm. Is it correct that the D435i can output both stereo images, as well as a RGB-D image, where the depth is based purely on active light patterns (from the D430 Depth Module)?
Thank you!