This repository is the first part of the project: a PyTorch implementation of *Depth Map Prediction from a Single Image using a Multi-Scale Deep Network* by David Eigen, Christian Puhrsch, and Rob Fergus ([paper link](https://arxiv.org/abs/1406.2283)).
We used the NYU Depth Dataset V2, specifically its labeled subset (~2.8 GB), which provides 1449 densely labeled pairs of aligned RGB and depth images. For this project we divided the labeled subset into three parts: 1024 training, 224 validation, and 201 testing pairs (a sketch of this split is given below). NYU also provides a raw dataset (~428 GB), which we could not train on due to machine capacity.
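Below is a minimal sketch of how the labeled `.mat` file could be wrapped in a PyTorch `Dataset` and split 1024/224/201; the file path, the HDF5 key names (`images`, `depths`), the array shapes, and the random seed are assumptions for illustration, not necessarily what this repository does.

```python
import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, random_split

# Hypothetical path to the NYU Depth V2 labeled release (~2.8 GB).
MAT_PATH = "nyu_depth_v2_labeled.mat"

class NYUDepthLabeled(Dataset):
    """Wraps the 1449 aligned RGB/depth pairs of the labeled NYU Depth V2 subset."""

    def __init__(self, mat_path=MAT_PATH):
        # The labeled release is a MATLAB v7.3 (HDF5) file; 'images' holds the
        # RGB frames and 'depths' the aligned depth maps (shapes are assumptions).
        f = h5py.File(mat_path, "r")
        self.images = f["images"]   # roughly (1449, 3, W, H) as read by h5py
        self.depths = f["depths"]   # roughly (1449, W, H)

    def __len__(self):
        return self.images.shape[0]

    def __getitem__(self, idx):
        rgb = torch.from_numpy(np.asarray(self.images[idx], dtype=np.float32)) / 255.0
        depth = torch.from_numpy(np.asarray(self.depths[idx], dtype=np.float32))
        return rgb, depth

full = NYUDepthLabeled()
# Reproduce the 1024 / 224 / 201 split described above (seed chosen arbitrarily).
train_set, val_set, test_set = random_split(
    full, [1024, 224, 201], generator=torch.Generator().manual_seed(0)
)
```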