aim-uofa / AdelaiDepth

This repo contains the projects 'Virtual Normal', 'DiverseDepth', and '3D Scene Shape'. They aim to solve monocular depth estimation and 3D scene reconstruction from a single image.

how to generate depth data like you provide? #70

Closed · yangtf-210310 closed this issue 7 months ago

yangtf-210310 commented 1 year ago

Hi, I want to train the model on my own datasets. How can I generate depth data like the data you provide?

guangkaixu commented 1 year ago

@yangtf-210310 Hi, sorry for the late response. If you would like to train the depth model on your own datasets, please generate an annotation.json file. It contains a list, and each element of the list is a dict with the relative paths rgb_path and depth_path. For example, the test_annotations.json of diversedepth_annotations in here looks as follows (a small script that builds such a file is sketched after the example):

[{"rgb_path": "taskonomy/rgbs/akiak/point_146_view_1_domain_rgb.png", "depth_path": "taskonomy/depths/akiak/point_146_view_1_domain_depth_zbuffer.png"}, {"rgb_path": "taskonomy/rgbs/akiak/point_80_view_5_domain_rgb.png", "depth_path": "taskonomy/depths/akiak/point_80_view_5_domain_depth_zbuffer.png"}, ...]

Then, change the quality_flg of each dataset in here as follows: 1) if the depth sensor is reliable enough to supervise the virtual normal loss, set it to 3; 2) if it is not so reliable, set it to 2; 3) if the depth comes from web images or unreliable pseudo labels, set it to 1, which only supervises relative depth relations.
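As a rough illustration of the three flag values (the real settings belong in the repo's dataset configs, so the dict and dataset names below are purely hypothetical):

```python
# Hypothetical mapping from dataset name to quality_flg; in practice these
# values are set in the repo's dataset configuration, not in a standalone dict.
quality_flg = {
    "taskonomy": 3,        # reliable sensor / z-buffer depth: also supervise virtual normal
    "noisy_lidar_set": 2,  # less reliable sensor depth: skip the virtual-normal supervision
    "web_stereo_set": 1,   # web images / pseudo labels: supervise relative depth relations only
}
```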