Yuliang-Zou opened this issue 5 years ago
Sure, I will add a demo for online tracking and mapping later this week. The demo code already performs simultaneous pose estimation and mapping, but only over a small video clip. I can add an additional demo showing how the code can be run on a full video sequence.
Hi, I just added a new demo showing how DeepV2D can be used as a SLAM system on NYU.
Cool~ Thanks!
Hi @zachteed , I modified your SLAM code a bit for a KITTI sequence, but it seems that the SLAM system cannot recover the absolute scale of the translation (I need to do global scale alignment for evaluation). I wonder whether the SLAM system can predict camera poses at absolute scale, or whether you applied some scaling factor during training?
For the KITTI sequence input, I scaled and cropped the images following the data preparation code. I also used the KITTI config file and the KITTI pre-trained model.
Thank you, and I look forward to your response.
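For reference, by "global scale alignment" I mean the standard trajectory-wise least-squares scale fit between predicted and ground-truth translations (a minimal sketch, not DeepV2D's code; the function name is mine):

```python
import numpy as np

def align_scale(pred_t, gt_t):
    """Least-squares scale s minimizing ||s * pred_t - gt_t||^2.

    pred_t, gt_t: (N, 3) arrays of predicted / ground-truth
    camera translations, assumed already associated frame-by-frame.
    """
    s = np.sum(gt_t * pred_t) / np.sum(pred_t * pred_t)
    return s, s * pred_t

# Toy check: predictions off by a uniform factor of 10.
gt = np.array([[0.0, 0.0, 1.0],
               [0.0, 0.0, 2.0]])
pred = gt / 10.0
s, aligned = align_scale(pred, gt)
# s == 10.0, and aligned matches gt exactly.
```

If the system predicted metric scale directly, the fitted factor here would be close to 1; on my KITTI runs it is not.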
Hi, you should be able to recover the absolute scale of translation on the KITTI dataset. You may need to scale the outputs by 10, because the output units on KITTI are 0.1 meters.
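In other words, the conversion to meters is a single multiplication on the translation part of each pose (a hypothetical sketch; the helper name and 4x4 pose convention are illustrative, not DeepV2D's API):

```python
import numpy as np

KITTI_UNIT = 10.0  # KITTI outputs are in units of 0.1 m, so multiply by 10

def to_meters(pose):
    """Scale the translation of a 4x4 pose matrix to meters.

    Rotation is unitless and is left untouched.
    """
    pose = pose.copy()
    pose[:3, 3] *= KITTI_UNIT
    return pose

pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, 0.5]     # 0.5 output units ahead
print(to_meters(pose)[:3, 3])      # [0. 0. 5.]  i.e. 5 meters
```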
Thanks!
Hi, I wonder if I also need to scale the output when using the NYU pre-trained models. I am testing them on some sequences from the TUM RGB-D dataset, but it seems that the scale is not correct.
Hi, thanks for the great work. I wonder if you can provide demo code that performs tracking (camera pose estimation) and mapping (depth estimation) simultaneously.