# merged_depth

merged_depth runs (1) AdaBins, (2) DiverseDepth, (3) MiDaS, (4) SGDepth, and (5) Monodepth2, and computes a weighted-average per-pixel absolute depth estimate.
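Conceptually, the merged prediction is a per-pixel weighted mean over the five aligned model outputs. Below is a minimal sketch of that step; the function name, the uniform weights, and the array shapes are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def merge_depths(depth_maps, weights):
    """Weighted per-pixel average of aligned absolute depth maps.

    depth_maps: list of HxW float arrays, each already scaled to meters.
    weights: one scalar weight per model.
    """
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()               # normalize to sum to 1
    stacked = np.stack(depth_maps, axis=0)          # (N, H, W)
    return np.tensordot(weights, stacked, axes=1)   # (H, W) weighted mean

# Hypothetical example: five model outputs merged with uniform weights.
preds = [np.random.rand(480, 640) * 20 for _ in range(5)]
merged = merge_depths(preds, [1.0] * 5)
```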
First, download the pretrained models using the `download_models` script. Next, run the `infer` script; this will run on all images in `test/input` and save the results to `test/output`.
```
python3 -m pip install -r requirements.txt
python3 -m merged_depth.utils.download_models
python3 -m merged_depth.infer
```
If you're using Anaconda3, the following has been tested to work on Windows:
```
conda create --name merged_depth python=3.6
conda activate merged_depth
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
python3 -m pip install -r requirements.txt
python3 -m merged_depth.utils.download_models
python3 -m merged_depth.infer
```
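Before running inference, a quick sanity check (illustrative, not part of the repository) confirms that the pinned PyTorch build is installed and can see the GPU:

```python
import torch

print(torch.__version__)          # expect 1.8.0
print(torch.cuda.is_available())  # True if the CUDA 11.1 build found a GPU
```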
The results include (1) a `_depth.npy` file that you can load (see `load_and_display_depth.py`), and (2) a `_stacked.png` file that shows the original and colorized depth images.
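For example, a minimal way to load and view one result, in the spirit of `load_and_display_depth.py` (the output filename below is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

depth = np.load("test/output/example_depth.npy")  # HxW absolute depth in meters
print(depth.shape, depth.min(), depth.max())

plt.imshow(depth, cmap="plasma")
plt.colorbar(label="depth (m)")
plt.show()
```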
To run the predictor on a single input, use `infer_single.py`:

```
python3 -m merged_depth.infer_single ~/foo/bar/test.png
```
The output depth is absolute depth in meters. The colorizer range is `[0, 20]`.
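If you want to reproduce the colorization yourself, here is a sketch that clips depth to the same `[0, 20]` meter range; the `plasma` colormap is an assumption, as the repository's colorizer may use a different one:

```python
import numpy as np
import matplotlib.cm as cm

def colorize(depth, vmin=0.0, vmax=20.0):
    """Map absolute depth (meters) to an RGB image, clipping to [vmin, vmax]."""
    normalized = np.clip((depth - vmin) / (vmax - vmin), 0.0, 1.0)
    rgba = cm.plasma(normalized)               # HxWx4 floats in [0, 1]
    return (rgba[..., :3] * 255).astype(np.uint8)
```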