Closed aakash26 closed 3 years ago
Hi,
You certainly don't need ground-truth depth maps to run our algorithm, that would defeat its whole purpose :) It was just my implementation that did not allow it. I have now added an option to run the online testing of fusionnet without ground-truth depths for evaluation. You can pull the changes and use evaluate = False here.
One crucial requirement for the videos is metric pose measurements. There must be no scale ambiguity, otherwise the depth planes used for the plane-sweep stereo won't match the training behaviour and the system will most likely produce inaccurate results. Two options quickly come to mind.
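For instance, if a few camera positions along the trajectory are known metrically (e.g. from a stereo rig or measured markers), the scale of a monocular SLAM trajectory can be recovered with a similarity (Umeyama) alignment. This is a minimal sketch of that idea, not code from the repository, assuming numpy and corresponding 3D positions:

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Find scale s, rotation R, translation t minimizing ||dst - (s*R*src + t)||.

    src, dst: (N, 3) arrays of corresponding 3D positions, e.g. up-to-scale
    SLAM camera centers (src) vs. metric reference positions (dst).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)              # cross-covariance of the point sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # guard against a reflection solution
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)      # variance of the source points
    s = np.trace(np.diag(D) @ S) / var_s    # recovered metric scale factor
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the recovered scale `s` to the whole trajectory removes the scale ambiguity before the poses are fed to the plane-sweep stereo.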
I hope these ideas help you.
Hi @ardaduz ,
Thanks for replying so quickly. Yes, I understood that the ground-truth depths were only needed for evaluation, but I will pull the latest code. For camera poses I was using ORB-SLAM in online monocular mode, but I agree with you about the scale ambiguity. I have a stereo setup, which I will hopefully use today, and I will first try evaluating with COLMAP. Thanks for the input, I will let you know the results :)
I am closing the issue for now; please feel free to reopen it if you want to discuss further.
Hi authors, thanks for providing the code and all the information. The online testing script works great on the provided sample HoloLens dataset and on the TUM RGB-D SLAM dataset, giving great results. However, I now want to run it on custom videos taken with a smartphone, and I have one question:

1) I am using ORB-SLAM to predict camera poses, but it takes around 35-45 minutes of runtime on the GPU. Can you advise a faster algorithm for computing camera poses?
Thanks, and hoping for a reply.
Aakash Rajpal
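As background for the pose workflow above: ORB-SLAM saves its trajectory in the TUM format (`timestamp tx ty tz qx qy qz qw`, one pose per line, `#` comments). This is a minimal, hypothetical sketch of loading such a file into 4x4 pose matrices; the exact file name and camera-frame convention depend on your setup:

```python
import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Unit quaternion (x, y, z, w) -> 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def load_tum_trajectory(path):
    """Parse TUM-format trajectory lines into (timestamp, 4x4 pose) pairs."""
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue  # skip blank lines and comments
            ts, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())
            T = np.eye(4)
            T[:3, :3] = quat_to_rot(qx, qy, qz, qw)
            T[:3, 3] = [tx, ty, tz]
            poses.append((ts, T))
    return poses
```

Whichever pose source is used (ORB-SLAM, COLMAP, or a stereo rig), the matrices still need to be metrically scaled before being passed to the online testing script.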