luigifreda / pyslam

pySLAM contains a Visual Odometry (VO) pipeline in Python for monocular, stereo and RGBD cameras. It supports many modern local features based on Deep Learning.
GNU General Public License v3.0

Issue with loftr in version 2.1 - TypeError: 'NoneType' object is not iterable #112

Closed BlackdogandGrayDog closed 2 months ago

BlackdogandGrayDog commented 2 months ago

Hi Luigi,

First of all, thank you for your excellent updated version 2.1! I'm excited to use the newest matcher based on loftr for main_slam.py.

However, I encountered an issue when trying to use it. The error message I received is:

```
kps_data = np.array([ [x.pt[0], x.pt[1], x.octave, x.size, x.angle] for x in self.kps ], dtype=np.float32)
TypeError: 'NoneType' object is not iterable
```
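For reference, the error itself is easy to reproduce in isolation: it is what Python raises when a list comprehension iterates over `None`. The snippet below is an illustrative minimal reproduction, not pySLAM's actual code path; it just shows what happens when `kps` was never populated (as with a pair-only matcher that extracts no standalone keypoints).

```python
import numpy as np

# Minimal reproduction (illustrative, not pySLAM's actual code path):
# when no standalone keypoints were extracted, kps stays None, and the
# list comprehension tries to iterate over None.
kps = None
try:
    kps_data = np.array([[x.pt[0], x.pt[1]] for x in kps], dtype=np.float32)
    msg = "no error"
except TypeError as err:
    msg = str(err)
print(msg)  # 'NoneType' object is not iterable
```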

Could you please provide any guidance on how to properly use loftr with the latest version? Any help would be highly appreciated.

Thank you!

Best regards, Eric

luigifreda commented 2 months ago

Hi Eric, thanks for your feedback.

Unfortunately, at present, main_slam.py does not support LoFTR. In particular, LoFTR is not able to extract keypoints and descriptors from a single provided image. It works directly on an image pair (img1, img2) and produces a pair of corresponding keypoint vectors (kps1, kps2). If we feed LoFTR with consecutive video images, the extracted keypoints are different on each image. That is, given two consecutive calls, loftr(img1, img2) -> (kps1, kps2a) and loftr(img2, img3) -> (kps2b, kps3),

we have that the keypoint kps2a[i], extracted on img2 the first time, does not necessarily correspond to kps2b[i] or to any other kps2b[j] extracted the second time on img2. For these reasons, at present, we cannot use such a "pure" matcher with classic SLAM. Mapping and localization processes need more than two observations of each triangulated 3D point along different frames in order to obtain persistent map points and to properly constrain camera pose optimizations in the Sim(3) manifold. Work in progress.
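The problem can be sketched with a toy stand-in for a detector-free matcher. Here `pairwise_match` is a hypothetical placeholder for a LoFTR-style matcher (it is not LoFTR itself): it returns freshly sampled, pair-dependent keypoints on every call, so the two keypoint sets extracted on the same image in two different calls do not coincide, and there is no persistent keypoint identity to track across frames.

```python
# Toy sketch (assumption: pairwise_match stands in for a LoFTR-style
# detector-free matcher; this is not pySLAM or LoFTR code).
import numpy as np

rng = np.random.default_rng(0)

def pairwise_match(img_a, img_b, n=5):
    """Stand-in for a detector-free matcher: it samples pair-dependent
    keypoints instead of re-detecting stable ones on each image."""
    h, w = img_a.shape
    kps_a = rng.uniform(0.0, [w, h], size=(n, 2))
    # matched locations in img_b (here just a noisy copy, for illustration)
    kps_b = kps_a + rng.normal(0.0, 1.0, size=(n, 2))
    return kps_a, kps_b

img1 = np.zeros((480, 640))
img2 = np.zeros((480, 640))
img3 = np.zeros((480, 640))

kps1,  kps2a = pairwise_match(img1, img2)   # first time img2 is processed
kps2b, kps3  = pairwise_match(img2, img3)   # second time img2 is processed

# The keypoints extracted on the SAME image (img2) in the two calls differ,
# so index i in kps2a does not refer to the same 3D point as index i in kps2b.
print(np.allclose(kps2a, kps2b))  # False: no persistent keypoint identity
```

This is exactly why a classic SLAM back-end, which needs the same map point re-observed across many frames, cannot consume such matches directly.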

In the meantime, however, you can test LoFTR with main_vo.py and main_feature_matching.py.