Closed by amine-chaabouni 1 year ago
Hi! You are right, we currently only use depth for the local descriptors, which is essential for 3D registration. You could use depth for the global descriptors as well, but the two Visual Place Recognition techniques we currently support, CosPlace (https://github.com/gmberton/CosPlace) and NetVLAD (https://github.com/Nanne/pytorch-NetVlad), require only RGB images, so we don't include depth there. Unfortunately, I don't know much about the RGB-D place recognition literature, but there are most probably some very interesting techniques that could be included in Swarm-SLAM. Let me know if you find a suitable one, and I encourage you to submit a pull request! :smiley:
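To illustrate the general idea of extending a global descriptor with depth, here is a minimal, hypothetical sketch (plain NumPy, not Swarm-SLAM or NetVLAD/CosPlace code): a toy RGB histogram descriptor stands in for the learned RGB-only descriptor, a depth histogram is computed separately, and the two are fused by concatenation and L2 normalization. All function names and parameters here are illustrative assumptions, not part of the actual codebase.

```python
import numpy as np

def rgb_global_descriptor(rgb, bins=8):
    # Toy stand-in for a learned RGB-only global descriptor
    # (e.g. NetVLAD/CosPlace): per-channel intensity histograms.
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    return np.concatenate(hists).astype(np.float64)

def depth_global_descriptor(depth, bins=8, max_range=10.0):
    # Toy depth descriptor: histogram of valid depth readings (meters),
    # ignoring zeros and out-of-range values.
    valid = depth[(depth > 0) & (depth < max_range)]
    return np.histogram(valid, bins=bins,
                        range=(0.0, max_range))[0].astype(np.float64)

def rgbd_global_descriptor(rgb, depth):
    # Naive fusion: concatenate the two descriptors, then L2-normalize
    # so cosine/inner-product matching between keyframes stays meaningful.
    d = np.concatenate([rgb_global_descriptor(rgb),
                        depth_global_descriptor(depth)])
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

# Example usage with random data in place of a real RGB-D keyframe.
rgb = np.random.randint(0, 256, size=(48, 64, 3), dtype=np.uint8)
depth = np.random.uniform(0.0, 12.0, size=(48, 64))
desc = rgbd_global_descriptor(rgb, depth)
print(desc.shape)  # (32,): 3 channels x 8 bins + 8 depth bins
```

A real integration would of course replace the histograms with a learned RGB-D place recognition model, but the fusion-by-concatenation pattern is a common, simple baseline.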
Hello, me again with some questions as I don't understand some implementation details.
When handling RGB-D or stereo camera input, the keyframes are sent as a KeyframeRGB message containing just one RGB image, generated at this line: https://github.com/lajoiepy/cslam/blob/fd62c32e44c736c79b991c7d2d595766956313e9/src/front_end/rgbd_handler.cpp#L559. At that point the depth information is no longer used (it is lost for the global description). Is there a particular reason why you stop using the depth here?
I also saw that you set the descriptors here https://github.com/lajoiepy/cslam/blob/fd62c32e44c736c79b991c7d2d595766956313e9/src/front_end/rgbd_handler.cpp#L308 using RTAB-Map, which does use the depth. Those are used for local descriptor registration, so the depth information is not lost there, but it is lost in the global descriptors. Was that done on purpose?
If I still wanted to use the depth information for the global description, can you recommend a description tool?