The pre-processed dataset simply registers each depth frame with its RGB frame; no augmentation is applied there. Data augmentation is done on the fly during training. See below: https://github.com/fangchangma/sparse-to-dense.pytorch/blob/ddba1cd821861b29ac4702d141cb358b1c524e77/nyu_dataloader.py#L51
There are 47584 frames in the training set.
Thanks
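To make the "on the fly" part above concrete, here is a minimal sketch of a joint RGB/depth transform of the kind a PyTorch dataloader would apply inside `__getitem__`. The function name and the specific perturbations (horizontal flip and color jitter) are illustrative assumptions, not the repository's exact code at the linked line:

```python
import numpy as np

def augment(rgb, depth, rng=np.random):
    """Jointly augment an RGB image (H, W, 3) and its depth map (H, W).

    Hypothetical sketch: random horizontal flip applied to both modalities,
    plus per-channel color jitter on the RGB image only.
    """
    # Random horizontal flip: RGB and depth must be flipped together
    # so the two frames stay registered.
    if rng.rand() < 0.5:
        rgb = np.fliplr(rgb).copy()
        depth = np.fliplr(depth).copy()

    # Random per-channel color jitter on the RGB image; depth values
    # are left untouched.
    scale = rng.uniform(0.8, 1.2, size=3)
    rgb = np.clip(rgb.astype(np.float32) * scale, 0, 255).astype(np.uint8)

    return rgb, depth
```

The transforms used in the repository may also include random rotation and scaling (see the linked dataloader line); the point here is only that they run per sample at load time, so every epoch sees a differently perturbed copy of the same training frames.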
Hi @fangchangma, thanks for sharing the code. About the NYU dataset: have the depth images been filled, and which method was used, a cross-bilateral filter or Colorization? Thank you.
Edit: I checked the paper and it says cross-bilateral filter. No need to respond. Thanks.
@icemiliang
About the NYU dataset, have the depth images been filled
No, there is no need for pre-processing of depth.
@fangchangma It says in the paper, Section IV-A, that the ground-truth depth images were in-painted with a cross-bilateral filter after being projected onto the RGB images. This is what I meant. Please let me know if I have understood it correctly.
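For context, a cross- (or joint-) bilateral filter fills a missing depth pixel with a weighted average of nearby valid depth values, where the weights combine spatial distance with color similarity in the registered RGB guidance image. Below is a minimal, unoptimized sketch of that idea; the function name, parameter values, and the zero-means-missing convention are assumptions, and this is not the code actually used to produce the released dataset:

```python
import numpy as np

def cross_bilateral_fill(depth, rgb, radius=5, sigma_s=3.0, sigma_r=20.0):
    """Fill zero-valued (missing) depth pixels with a cross-bilateral filter.

    depth: (H, W) float array, 0 where no measurement is available.
    rgb:   (H, W, 3) uint8 guidance image registered with the depth map.
    Weights combine a spatial Gaussian (sigma_s, in pixels) with a range
    Gaussian over RGB differences (sigma_r, in intensity levels).
    Naive O(H*W*radius^2) loop, for illustration only.
    """
    H, W = depth.shape
    filled = depth.copy()
    guide = rgb.astype(np.float32)

    for y, x in zip(*np.nonzero(depth == 0)):
        y0, y1 = max(0, y - radius), min(H, y + radius + 1)
        x0, x1 = max(0, x - radius), min(W, x + radius + 1)

        patch_d = depth[y0:y1, x0:x1]
        valid = patch_d > 0
        if not valid.any():
            continue  # no measured depth nearby to borrow from

        # Spatial weights: Gaussian in pixel distance from (y, x).
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))

        # Range weights: Gaussian in RGB difference w.r.t. the center pixel.
        diff = guide[y0:y1, x0:x1] - guide[y, x]
        w_range = np.exp(-(diff ** 2).sum(axis=-1) / (2 * sigma_r ** 2))

        w = w_spatial * w_range * valid
        if w.sum() > 0:
            filled[y, x] = (w * patch_d).sum() / w.sum()

    return filled
```

The filter used for the released ground truth may differ in its details (the paper only states that a cross-bilateral filter was applied after projection), but the principle is the same: missing depth is interpolated from neighbors that look similar in the registered RGB image.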
@icemiliang @fangchangma @Ariel-JUAN @timethy @abdo-eldesokey Hi, how can I download the dataset quickly? The download is very slow when I use wget http://datasets.lids.mit.edu/sparse-to-dense/data/kitti.tar.gz.
Hi, thanks for sharing the great work. I have a question: is the preprocessed NYU Depth V2 dataset augmented as described in your paper? And by the way, how many images are in the dataset? Thanks.