This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape'. They aim to solve monocular depth estimation and 3D scene reconstruction from a single image.
Reconstructing the 3D point cloud when the intrinsic is known #82
Hi,
Thanks for the great work!
Your model predicts a shift for the estimated depth and a scale factor for the camera focal length after monocular depth estimation. In my project, the cameras are already calibrated, so I have the intrinsic parameters. How can I obtain an accurate point cloud with your model in this case? Should I simply ignore the predicted scaling factor for the focal length?
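For context, the back-projection step the question is about can be sketched as follows. This is a minimal illustration assuming a standard pinhole camera model with known intrinsics (fx, fy, cx, cy); the function `depth_to_point_cloud` and its parameters are hypothetical and not part of this repo's API:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject an HxW metric depth map into an (H*W, 3) point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # pixel grid: u runs over columns, v over rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# toy example: a 4x4 depth map at a constant 2.0 m
depth = np.full((4, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

Note that with calibrated intrinsics, the predicted focal-length scale would only matter if the focal length were unknown; the predicted depth shift, however, still affects the depth values themselves before unprojection.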