aim-uofa / AdelaiDepth

This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape', which aim to solve monocular depth estimation and 3D scene reconstruction from a single image.
Creative Commons Zero v1.0 Universal

About intrinsic usage in mix dataset training #4

Closed mzy97 closed 3 years ago

mzy97 commented 3 years ago

In the normal loss, you construct the predicted normal map from a point cloud (generated from the aligned depth). When mixing datasets during training, a batch will contain several different intrinsics; how do you manage them so each one is paired with the right depth map at training time? And when mixing in web stereo data, there are no intrinsics, so how do you construct a point cloud from that data?

Thank you.

YvanYin commented 3 years ago

1) Managing intrinsics in a batch. -- When I load each sample in a batch, I load the depth/RGB/intrinsics together and store them in a dict. This ensures that the depth and intrinsics stay paired.

2) Point cloud for the web stereo data. -- We do not enforce any geometry loss on the web stereo data, so we never need to reconstruct a point cloud from it.
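A minimal sketch of the dict-based pairing described above: each sample carries its own intrinsics matrix, so even a mixed-dataset batch can unproject every depth map with the correct camera. The function and key names here are illustrative, not taken from the repo's code.

```python
import numpy as np

def load_sample(rgb, depth, K):
    """Bundle one sample so depth and intrinsics travel together in a dict."""
    return {"rgb": rgb, "depth": depth, "intrinsics": K}

def depth_to_pointcloud(depth, K):
    """Unproject a depth map to camera-space 3D points using its own K."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (H, W, 3)

# In a mixed batch, each sample is unprojected with its paired intrinsics,
# so samples from different datasets can coexist in the same batch.
batch = [load_sample(np.zeros((4, 4, 3)), np.ones((4, 4)), np.eye(3))]
for sample in batch:
    points = depth_to_pointcloud(sample["depth"], sample["intrinsics"])
```

Keeping depth and intrinsics in one dict per sample (rather than in parallel batch-level arrays) is what guarantees they can never be mismatched after shuffling or mixing datasets.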