Closed ArghyaChatterjee closed 3 years ago
Hey @ArghyaChatterjee,
Thanks for following our work!
The inputs of OverlapNet are range images generated from two LiDAR scans, and the outputs are the estimated overlap and yaw angle between them, which are just two floats. You could write a ROS publisher to broadcast these two predictions as a ROS message.
Thanks for the reply. In OverlapNet, the inputs are two range images generated from LiDAR scans in KITTI (.bin) format, which are compared to find possible loop candidates. In ROS, the scans that I publish in real time on a topic are of type sensor_msgs/PointCloud2. How can I interface between these two different message types in real time (online)?
If the input is point clouds, one first needs to convert them into range and normal images.
For fast preprocessing of point clouds (generating range and normal images online), you could use the C++ implementation provided in our localization repo instead of the corresponding Python functions, which are slow.
There is an even faster way to preprocess point clouds in our SLAM pipeline SuMa++, which uses OpenGL rendering.
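For reference, the spherical projection behind these range images can be sketched in a few lines of NumPy. This is a minimal illustration, not the repo's exact implementation: the function name is mine, and the field-of-view defaults below assume the Velodyne HDL-64E used in KITTI (adjust them for other sensors).

```python
import numpy as np

def range_projection(points, fov_up=3.0, fov_down=-25.0, H=64, W=900):
    """Project an (N, 3) LiDAR point cloud onto an H x W spherical range image.

    fov_up / fov_down are in degrees; the defaults assume a Velodyne HDL-64E.
    Empty pixels are filled with -1.
    """
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = abs(fov_up_rad) + abs(fov_down_rad)

    depth = np.linalg.norm(points, axis=1)
    valid = depth > 0
    points, depth = points[valid], depth[valid]

    # spherical angles of each point
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / depth)

    # normalize angles to [0, 1], then scale to pixel coordinates
    u = 0.5 * (yaw / np.pi + 1.0) * W
    v = (1.0 - (pitch + abs(fov_down_rad)) / fov) * H
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # write farthest points first so nearer points overwrite them
    order = np.argsort(depth)[::-1]
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    range_image[v[order], u[order]] = depth[order]
    return range_image
```

Normal images are then typically computed from the range image by taking cross products of differences between neighboring pixels, which is exactly what makes the C++/OpenGL variants so much faster than a pure-Python loop.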
I hope this helps.
Hi @Chen-Xieyuanli, I have started working in this direction. It looks like I have to convert sensor_msgs/PointCloud2 messages to .pcd format and then from .pcd to .bin format using this repo. Then I can use your localization repo to generate depth and normal images from the corresponding scans, and use the demo2.py file from this repo to predict the overlap and yaw angle between those pairs of images using the inference file. Am I right? Is there any alternative to what I have described? Also, do you know of any work already implemented in this direction?
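The .bin step of that pipeline is simple enough to sketch directly: KITTI scans are flat little-endian float32 sequences of x, y, z, intensity. This hedged example covers only the file-format part; extracting an (N, 4) array from a sensor_msgs/PointCloud2 would be done with a helper such as ros_numpy or sensor_msgs_py's read_points (not shown), and the function names here are mine.

```python
import numpy as np

def cloud_to_kitti_bin(xyzi, out_path):
    """Write an (N, 4) array of x, y, z, intensity to a KITTI-style .bin file.

    KITTI .bin scans are flat float32 sequences: x0 y0 z0 i0 x1 y1 z1 i1 ...
    The xyzi array is assumed to be already extracted from the PointCloud2
    message (e.g. via ros_numpy).
    """
    arr = np.asarray(xyzi, dtype=np.float32)
    assert arr.ndim == 2 and arr.shape[1] == 4, "expected N x 4 (x, y, z, intensity)"
    arr.tofile(out_path)

def load_kitti_bin(path):
    """Read a KITTI-style .bin back into an (N, 4) float32 array."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)
```

Writing .bin files directly like this would also let you skip the intermediate .pcd conversion entirely.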
Sorry for replying so late; I forgot about this issue (:man_facepalming:). Your solution sounds correct, and I hope you have already found a working setup. Another way is to write ROS publishers and subscribers between the different modules.
Since we have no plans to create a ROS wrapper, I will close this issue. Please feel free to modify the code, and I hope you have fun with it.
Hello, I was wondering if there is a way to integrate this work with ROS. Say we provide the point cloud messages (scans) on a topic, and each message is split into three or four sets of data (normal, range, intensity, and semantic) as input (for testing) to the model, and the model predicts loop-closure candidates as output in a ROS message format. Do you have any idea about this, or do you know whether someone has already done something in this direction?