We aim to estimate dense depth maps from single RGB images (monocular depth estimation).
We tested our networks on both Kinect-style RGB-D data and paired RGB/LiDAR data.
[NYU Depth V2 dataset](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)
[Dataset provided by Matt, from the course EECS 599 Autonomous Driving Cars](http://umich.edu/~fcav/rob599_dataset_deploy.zip)
See the files `rawDataLoader.py` and `rgblDataLoader.py` for the directory layout the datasets are expected to be placed in.
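For illustration, here is a minimal sketch of opening the NYU Depth V2 labeled subset (`nyu_depth_v2_labeled.mat`, an HDF5 file) with `h5py`; the path below is an assumption, and the authoritative layout is whatever the two loader scripts above expect:

```python
# Minimal sketch: reading the NYU Depth V2 labeled subset with h5py.
# The file path is a placeholder; see rawDataLoader.py / rgblDataLoader.py
# for the layout this repository actually uses.
import h5py
import numpy as np

with h5py.File("data/nyu_depth_v2_labeled.mat", "r") as f:
    images = np.array(f["images"])  # RGB, stored MATLAB-style as (N, 3, W, H)
    depths = np.array(f["depths"])  # depth in meters, (N, W, H)

rgb = images.transpose(0, 3, 2, 1)  # -> (N, H, W, 3)
depth = depths.transpose(0, 2, 1)   # -> (N, H, W)
print(rgb.shape, depth.shape)
```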
Run `training*.py` to train the model.
For the model with pre-training steps, run `training1.py` and `training2.py` sequentially.
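A minimal sketch of the two-stage handoff, assuming `training1.py` saves a checkpoint that `training2.py` resumes from; the model class, checkpoint name, and use of PyTorch here are placeholders rather than the repository's actual API:

```python
# Sketch of a pre-train / fine-tune handoff via a saved checkpoint.
# TinyDepthNet and the checkpoint file name are hypothetical.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    """Stand-in for the repository's actual network."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyDepthNet()

# Stage 1 (training1.py): pre-train, then persist the weights.
torch.save(model.state_dict(), "checkpoint_stage1.pth")

# Stage 2 (training2.py): restore the pre-trained weights and keep training.
model.load_state_dict(torch.load("checkpoint_stage1.pth"))
```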
Run `show_loss.py` to plot the loss on the training and test sets over the course of training.
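The scripts' logging format is not shown here; the sketch below assumes the per-epoch losses were saved as NumPy arrays (hypothetical file names) and plots them with matplotlib:

```python
# Sketch of plotting loss curves; train_loss.npy / test_loss.npy are
# hypothetical names for per-epoch loss logs.
import numpy as np
import matplotlib.pyplot as plt

train_loss = np.load("train_loss.npy")
test_loss = np.load("test_loss.npy")

plt.plot(train_loss, label="training loss")
plt.plot(test_loss, label="test loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```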
To test the model, run `test*.py`. This generates example predictions on the test set and writes the images into the `result_images/` directory.
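For reference, one common way to write a float depth prediction out as a viewable image is to normalize it to 8-bit grayscale; the helper below is a hedged sketch, not the repository's code:

```python
# Sketch: save a float depth map as a grayscale PNG in result_images/.
# The random array stands in for a network prediction.
import os
import numpy as np
from PIL import Image

os.makedirs("result_images", exist_ok=True)

def save_depth_png(depth, path):
    """Normalize a float depth map to 0-255 and save as 8-bit grayscale."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    Image.fromarray((d * 255).astype(np.uint8)).save(path)

pred = np.random.rand(480, 640)  # placeholder prediction
save_depth_png(pred, "result_images/example_pred.png")
```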
Run `evaluate.py` to compute metrics such as the Absolute Relative Error (Abs Rel) and Root Mean Square Error (RMSE) from the generated images.
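Abs Rel and RMSE are standard monocular-depth metrics; below is a minimal sketch of how they are conventionally computed over pixels with valid ground truth (the arrays here are placeholders, not data from this project):

```python
# Sketch of the standard Abs Rel and RMSE depth metrics, computed over
# pixels with valid (positive) ground-truth depth.
import numpy as np

def abs_rel(pred, gt):
    mask = gt > 0
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

def rmse(pred, gt):
    mask = gt > 0
    return np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2))

gt = np.random.rand(480, 640) * 10           # placeholder ground truth (m)
pred = gt + np.random.randn(480, 640) * 0.1  # placeholder prediction
print(f"Abs Rel: {abs_rel(pred, gt):.4f}, RMSE: {rmse(pred, gt):.4f} m")
```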
Original RGB image:
Original raw depth map:
Predicted depth: