Closed · TwiceMao closed this 1 year ago

Thank you for such great work! Is it okay to use my own RGB-D dataset? I have a multi-room RGB-D dataset, but I don't have IMU, lidar, stereo, or other data. Can your code run on this dataset? Thank you for your reply!
Yes, you can run it with RGB-D data as long as you have visual odometry software that works for your data.
For example, start from this launch file, remove the camera, lidar, and IMU nodes, and replace the odometry node (currently lidar-based) with the RGB-D visual odometry solution of your choice.
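As an illustration, here is a minimal sketch of such a modified launch file, assuming rtabmap's `rgbd_odometry` node as the RGB-D odometry source; the topic names and remappings below are placeholders, not values taken from this repository:

```python
# Minimal launch sketch: sensor-driver and lidar-odometry nodes removed,
# an RGB-D visual odometry node added in their place. Topic names are
# placeholders for whatever your bag actually publishes.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # RGB-D visual odometry (replaces the lidar-based odometry node).
        # The package is rtabmap_odom on recent distros (rtabmap_ros on older ones).
        Node(
            package='rtabmap_odom',
            executable='rgbd_odometry',
            name='rgbd_odometry',
            remappings=[
                ('rgb/image', '/camera/color/image_raw'),
                ('depth/image', '/camera/aligned_depth_to_color/image_raw'),
                ('rgb/camera_info', '/camera/color/camera_info'),
            ],
        ),
        # The Swarm-SLAM nodes from the original launch file stay unchanged;
        # only the sensor drivers and the odometry source are swapped out.
    ])
```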
@lajoiepy To supplement the question above: the RGB images of my RGB-D dataset are in .jpg format, the depth images are in .png format, and the poses are in .txt format. Will this still work with Swarm-SLAM? If not, how can I convert the data to meet the requirements? My concern is that Swarm-SLAM only works with the .bag dataset format. Thanks!
Hi! Indeed, you need to convert your image sequence to a ROS 2 bag. Here is a tutorial on creating a bag from your own data: https://docs.ros.org/en/foxy/Tutorials/Advanced/Recording-A-Bag-From-Your-Own-Node-CPP.html . There is also a Python interface (rosbag2_py) for this.
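For instance, a rough sketch of such a conversion using the rosbag2_py interface might look like the following; the directory layout, topic names, and 30 Hz frame timing are assumptions about your dataset, and the `TopicMetadata` constructor can differ slightly across ROS 2 distributions:

```python
# Sketch: write a jpg/png RGB-D image sequence into a ROS 2 bag with
# rosbag2_py. Paths, topics, and the 30 Hz timing are assumptions;
# use your dataset's real timestamps if you have them.
import glob

import cv2
import rosbag2_py
from cv_bridge import CvBridge
from rclpy.serialization import serialize_message

bridge = CvBridge()
writer = rosbag2_py.SequentialWriter()
writer.open(
    rosbag2_py.StorageOptions(uri='rgbd_bag', storage_id='sqlite3'),
    rosbag2_py.ConverterOptions('', ''),
)
for name in ['/camera/color/image_raw', '/camera/depth/image_raw']:
    writer.create_topic(rosbag2_py.TopicMetadata(
        name=name, type='sensor_msgs/msg/Image', serialization_format='cdr'))

rgb_files = sorted(glob.glob('rgb/*.jpg'))
depth_files = sorted(glob.glob('depth/*.png'))
for i, (rgb_path, depth_path) in enumerate(zip(rgb_files, depth_files)):
    stamp_ns = int(i * 1e9 / 30)  # assumed 30 Hz sequence
    rgb_msg = bridge.cv2_to_imgmsg(cv2.imread(rgb_path), encoding='bgr8')
    depth_msg = bridge.cv2_to_imgmsg(
        cv2.imread(depth_path, cv2.IMREAD_UNCHANGED), encoding='16UC1')
    for msg in (rgb_msg, depth_msg):
        msg.header.stamp.sec = stamp_ns // 10**9
        msg.header.stamp.nanosec = stamp_ns % 10**9
        msg.header.frame_id = 'camera_link'
    writer.write('/camera/color/image_raw', serialize_message(rgb_msg), stamp_ns)
    writer.write('/camera/depth/image_raw', serialize_message(depth_msg), stamp_ns)
```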
@lajoiepy If I replace your VIO module with a VO module (such as ORB-SLAM2) and run the system on an RGB-D dataset, can the results still be considered representative of your paper? You state "Our system supports inertia, lidar, stereo, and RGB-D sensing" in the paper, but the experiment above requires modifying your code, so I am not sure. I ask because I now need a traditional C-SLAM comparison experiment that takes an RGB-D dataset (or, at worst, an RGB dataset) as input.
The odometry source is external to our system. We provide rtabmap launch files as examples, but you can use the solution of your choice (e.g., ORB-SLAM) for odometry without modifying our code base. You only need to adapt the launch and config files accordingly.
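To make that concrete, here is a hypothetical sketch of adding an external odometry node through topic remapping alone; the wrapper package, executable, and topic names are placeholders (actual ORB-SLAM ROS 2 wrappers differ), and the Swarm-SLAM config is assumed to read odometry from /r0/odom:

```python
# Hypothetical illustration: wire an ORB-SLAM ROS 2 wrapper into the
# pipeline via remapping only, with no changes to the Swarm-SLAM code.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='orb_slam3_ros',    # placeholder wrapper package
            executable='rgbd_node',     # placeholder executable
            remappings=[
                # Route the wrapper's odometry output to the topic name
                # configured on the Swarm-SLAM side.
                ('odom', '/r0/odom'),
            ],
        ),
    ])
```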