Closed Jbwasse2 closed 3 years ago
Got ORB-SLAM working.
If we want to look into other methods, we should look at this chart.
It may be worth getting IMU data before collecting more data. We may also need Nolan's ROS2 setup. It should be noted that RTAB-Map does not support monocular input.
Since the data is already collected, I just need to get the labels from ORB-SLAM.
ORB-SLAM is giving some awful pose labels, as seen here. This is the case where the robot is turning the corner shown here.
Another option not shown in the table is https://google-cartographer.readthedocs.io/en/latest/ — Cartographer looks like a solid choice, but it requires LIDAR.
Ran ORB-SLAM2 on the Gibson dataset; the results also look bad.
ViNG does something I did in my experiments, where the labels come from the frame index difference: e.g., if frame1 is frame 10 and frame2 is frame 40, the distance label is 30. I think this may be the way to go for now while waiting for Ruben to look into depth.
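A minimal sketch of that labeling scheme: pair frames from one trajectory and use the frame-index gap as the distance label. The function name and the `max_gap` / `pairs_per_frame` knobs are my own invention for illustration, not anything from ViNG or our codebase.

```python
import random

def temporal_distance_labels(num_frames, max_gap=60, pairs_per_frame=2, seed=0):
    """Label frame pairs from a single trajectory by frame-index difference.

    Hypothetical helper: `max_gap` caps how far ahead we sample the second
    frame, `pairs_per_frame` controls how many pairs each frame anchors.
    Returns (frame1_idx, frame2_idx, distance_label) tuples.
    """
    rng = random.Random(seed)
    pairs = []
    for i in range(num_frames):
        for _ in range(pairs_per_frame):
            # Sample a later frame within max_gap (clamped to the trajectory end).
            j = min(num_frames - 1, i + rng.randint(0, max_gap))
            pairs.append((i, j, j - i))  # label = index difference
    return pairs

pairs = temporal_distance_labels(100)
```

So a pair of frame 10 and frame 40 would get a distance label of 30, with no pose estimate needed at all.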
The ViNG method gave roughly 96% testing accuracy on the indoor dataset.
ViNG is implemented in ROS code in #26.
This time it's built off of real-world data, but I can also use the simulation version for sim2real gap evaluation and results. TODOs