Hi, the BTS models are also trained on tens of thousands of images, not on the 795 images of the NYU-Depth-v2 labeled split. Our training set is the same as that of previous leading methods, such as NeWCRFs.
I followed the guidance of BTS:
$ cd ~/workspace/bts/utils
### Get official NYU Depth V2 split file
$ wget http://horatio.cs.nyu.edu/mit/silberman/nyu_depth_v2/nyu_depth_v2_labeled.mat
### Convert mat file to image files
$ python extract_official_train_test_set_from_mat.py nyu_depth_v2_labeled.mat splits.mat ../../dataset/nyu_depth_v2/official_splits/
I got 795 images and 654 images, which come from the 1,449 densely labeled pairs of aligned RGB and depth images. Could you tell me how to get the tens of thousands of NYUv2 images? Thank you!
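As a sanity check, the split sizes above can be read directly from splits.mat. A minimal sketch, assuming scipy is installed and that splits.mat uses the trainNdxs/testNdxs index fields of the official split:

import scipy.io

# Load the official split; indices are 1-based into the 1449 labeled pairs.
splits = scipy.io.loadmat('splits.mat')
train_idx = splits['trainNdxs'].squeeze()
test_idx = splits['testNdxs'].squeeze()

print(len(train_idx))  # expected: 795
print(len(test_idx))   # expected: 654
assert len(train_idx) + len(test_idx) == 1449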
Please follow the guide here: https://github.com/cleinc/bts/tree/master/pytorch.
Oh, I see, thank you!
Thank you for your great work! Like BTS, I trained on the 795 pictures that come from the 1,449 densely labeled pairs of aligned RGB and depth images, while your training set has 36,253 pictures. Are these pictures from the raw parts (407,024 new unlabeled frames)?
When I trained your method, the RMS was 0.3836, while your reported result is 0.3133. Does the different NYUv2 training set lead to the different results? I am not familiar with this field. Do all the compared methods use 36,253 pictures for training?
Thank you for your reply!
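For anyone comparing these numbers: RMS here is the standard root-mean-squared depth error used in monocular depth evaluation, sqrt(mean((pred - gt)^2)) over valid ground-truth pixels. A minimal sketch, where pred and gt are hypothetical NumPy depth maps in meters:

import numpy as np

def rms_error(pred: np.ndarray, gt: np.ndarray) -> float:
    # Depth sensors leave holes; evaluate only where ground truth exists.
    valid = gt > 0
    diff = pred[valid] - gt[valid]
    return float(np.sqrt(np.mean(diff ** 2)))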