fangchangma / sparse-to-dense.pytorch

ICRA 2018 "Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image" (PyTorch Implementation)

about the preprocessed NYU Depth V2 #2

Closed Ariel-JUAN closed 6 years ago

Ariel-JUAN commented 6 years ago

Hi, thanks for sharing the great work. I have a question: is the preprocessed NYU Depth V2 augmented following your paper? And by the way, how many images are in the dataset? Thanks.

fangchangma commented 6 years ago

The pre-processed dataset simply registers each depth frame with its RGB frame; there is no augmentation. The data augmentation is done on the fly during training. See below: https://github.com/fangchangma/sparse-to-dense.pytorch/blob/ddba1cd821861b29ac4702d141cb358b1c524e77/nyu_dataloader.py#L51
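For readers who want a concrete picture, here is a minimal sketch of the kind of on-the-fly augmentation the paper describes: random scaling s ∈ [1, 1.5] with depth values divided by s, rotation of ±5 degrees, a horizontal flip with probability 0.5, and simple color jitter. This is not the repository's exact code (see the linked nyu_dataloader.py for the real transforms, which also crop back to a fixed size); the function name and input types here are hypothetical.

```python
import random
import numpy as np
from PIL import Image, ImageEnhance

def augment(rgb, depth):
    """Hypothetical joint augmentation. rgb: PIL.Image, depth: 2-D float32 array in meters."""
    s = random.uniform(1.0, 1.5)        # random scale, as in the paper
    angle = random.uniform(-5.0, 5.0)   # random rotation in degrees
    do_flip = random.random() < 0.5     # horizontal flip with probability 0.5

    w, h = rgb.size
    rgb = rgb.resize((int(w * s), int(h * s)), Image.BILINEAR).rotate(angle)
    depth_img = Image.fromarray(depth).resize((int(w * s), int(h * s)),
                                              Image.NEAREST).rotate(angle)
    depth = np.asarray(depth_img, dtype=np.float32) / s  # scaling the image changes metric depth

    if do_flip:
        rgb = rgb.transpose(Image.FLIP_LEFT_RIGHT)
        depth = depth[:, ::-1].copy()

    # brightness jitter as a simple stand-in for the paper's color jitter
    rgb = ImageEnhance.Brightness(rgb).enhance(random.uniform(0.8, 1.2))
    return rgb, depth
```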

There are 47584 frames in the training set.
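As a quick sanity check, both the frame count and the RGB/depth registration can be inspected directly from the extracted files. This sketch assumes the preprocessed tarball was extracted to ./data/nyudepthv2 with one .h5 file per frame containing 'rgb' and 'depth' datasets, matching this repository's dataloader; adjust paths as needed.

```python
import glob
import h5py
import numpy as np

files = sorted(glob.glob('data/nyudepthv2/train/**/*.h5', recursive=True))
print(len(files))  # should print 47584 for the training split

with h5py.File(files[0], 'r') as f:
    rgb = np.transpose(np.array(f['rgb']), (1, 2, 0))  # (H, W, 3) uint8
    depth = np.array(f['depth'])                       # (H, W) float, meters
print(rgb.shape, depth.shape)
```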

Ariel-JUAN commented 6 years ago

Thanks

icemiliang commented 5 years ago

Hi @fangchangma, thanks for sharing the code. About the NYU dataset: have the depth images been filled, and which method was used, the cross-bilateral filter or colorization? Thank you.

Edit: I checked the paper and it says cross-bilateral filter. No need to respond. Thanks.

fangchangma commented 5 years ago

@icemiliang

About the nyu dataset, have the depth images been filled

No, there is no need for pre-processing of depth.

icemiliang commented 5 years ago

@fangchangma The paper says in Section IV-A that the ground-truth depth images were in-painted with a cross-bilateral filter after being projected onto the RGB images. This is what I meant. Please let me know if I have understood it correctly.
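For anyone curious what that in-painting step looks like, below is a hedged sketch of cross-bilateral (joint bilateral) hole filling guided by the RGB image, using normalized filtering so that only valid depth pixels contribute to the holes. This illustrates the technique the paper cites; it is not code from this repository or from the NYU toolbox. cv2.ximgproc requires opencv-contrib-python, and the sigma values are placeholders.

```python
import cv2
import numpy as np

def fill_depth_cross_bilateral(rgb, depth, sigma_color=25.0, sigma_space=9.0):
    """Hypothetical helper. rgb: HxWx3 uint8; depth: HxW float32 with 0 at missing pixels."""
    valid = (depth > 0).astype(np.float32)
    guide = rgb.astype(np.float32)  # joint and src must share the same depth type
    # Filter the depth map (zeros at holes) and the validity mask with the same
    # RGB-guided kernel, then normalize so holes are filled only from valid neighbors.
    num = cv2.ximgproc.jointBilateralFilter(guide, depth, -1, sigma_color, sigma_space)
    den = cv2.ximgproc.jointBilateralFilter(guide, valid, -1, sigma_color, sigma_space)
    filled = np.where(den > 1e-6, num / np.maximum(den, 1e-6), 0.0)
    out = depth.copy()
    out[valid == 0] = filled[valid == 0]  # keep measured depth, fill only the holes
    return out
```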

whubaichuan commented 4 years ago

@icemiliang @fangchangma @Ariel-JUAN @timethy @abdo-eldesokey Hi, how can I download the dataset quickly? The download is very slow when I use wget http://datasets.lids.mit.edu/sparse-to-dense/data/kitti.tar.gz.