HKUST-Aerial-Robotics / MVDepthNet

This repository provides a PyTorch implementation of the 3DV 2018 paper "MVDepthNet: real-time multiview depth estimation neural network"
GNU General Public License v3.0

what dataset and parameters do you use for training? #18

Closed lawsonsli closed 4 years ago

lawsonsli commented 4 years ago

I used the TUM RGB-D dataset for training. After about 100 epochs, this is an image pair example from the training dataset. pred_result The result is not as good as you reported. So, which dataset did you use for training? TUM (including dynamic objects), NYU v2, or something else? And what training parameters? Thanks for your great work. It helps a lot.

WANG-KX commented 4 years ago

I think the details are included in the paper; you can refer to it: https://arxiv.org/abs/1807.08563.

lawsonsli commented 4 years ago

Thanks. I'll try that. I'd appreciate it if you could share the script for processing the SUN3D dataset.

WANG-KX commented 4 years ago

Sorry, I searched carefully and I'm afraid I have lost that script. It has been a long time since the project. In hindsight, I recommend you directly use the dataset provided by DeMoN to train your network. It will ease your effort if you want to publish your work.

lawsonsli commented 4 years ago

Thanks a lot! Sincerely.

lawsonsli commented 4 years ago

I have another question. When doing image standardization, I don't know the mean and std of the whole DeMoN dataset, so I'm not sure how to proceed. Can I use the mean and std of the current image instead?

WANG-KX commented 4 years ago

You can use the mean and std from the ImageNet dataset: mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. See https://pytorch.org/docs/stable/torchvision/models.html
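For anyone landing here later, a minimal sketch of the channel-wise standardization described above (using NumPy for clarity; in a PyTorch pipeline `torchvision.transforms.Normalize` with the same statistics does the equivalent). The ImageNet values are the ones quoted in the comment; the function name and array layout are illustrative, not from the MVDepthNet code:

```python
import numpy as np

# ImageNet channel statistics (RGB), as quoted from the torchvision docs.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def standardize(image):
    """Standardize an HxWx3 float image with values in [0, 1].

    Subtracts the per-channel ImageNet mean and divides by the
    per-channel ImageNet std, broadcasting over the last axis.
    """
    return (image - IMAGENET_MEAN) / IMAGENET_STD
```

For example, a uniform gray image (all pixels 0.5) maps channel 0 to (0.5 - 0.485) / 0.229 ≈ 0.066, so the output is roughly zero-centered for typical natural images.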