
PatchFusion

An End-to-End Tile-Based Framework
for High-Resolution Monocular Metric Depth Estimation

[![Website](assets/badge-website.svg)](https://zhyever.github.io/patchfusion/) [![Paper](https://img.shields.io/badge/arXiv-PDF-b31b1b)](https://arxiv.org/abs/2312.02284) [![Hugging Face Space](https://img.shields.io/badge/🤗%20Hugging%20Face-Space-yellow)](https://huggingface.co/spaces/zhyever/PatchFusion) [![Hugging Face Model](https://img.shields.io/badge/🤗%20Hugging%20Face-Model-yellow)](https://huggingface.co/zhyever/PatchFusion) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) Zhenyu Li, Shariq Farooq Bhat, Peter Wonka.
KAUST

✨ NEWS

Repo Features

Environment setup

Install the environment using `environment.yml`:

Using mamba (fastest):

```bash
mamba env create -n patchfusion --file environment.yml
mamba activate patchfusion
```

Using conda:

```bash
conda env create -n patchfusion --file environment.yml
conda activate patchfusion
```

NOTE:

Before running the code, please first run:

```bash
export PYTHONPATH="${PYTHONPATH}:/path/to/the/folder/PatchFusion"
export PYTHONPATH="${PYTHONPATH}:/path/to/the/folder/PatchFusion/external"
```

Make sure that you have exported the `external` folder, which contains code from other repositories (ZoeDepth, Depth-Anything, etc.).
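
As a quick sanity check (a minimal sketch, not part of the repository's scripts), the import below should succeed from any working directory once the exports above are in place:

```python
# Quick sanity check after setting PYTHONPATH: the PatchFusion import only
# resolves if the repository folder has been added to PYTHONPATH as shown above.
import torch
from estimator.models.patchfusion import PatchFusion

print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
```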

Pre-Trained Models

We provide PatchFusion with various base depth models: ZoeDepth-N, Depth-Anything-vits, Depth-Anything-vitb, and Depth-Anything-vitl. The inference time of PatchFusion is linearly related to the base model's inference time.

```python
import torch

from estimator.models.patchfusion import PatchFusion

model_name = 'Zhyever/patchfusion_depth_anything_vitl14'

# Valid model names:
# 'Zhyever/patchfusion_depth_anything_vits14'
# 'Zhyever/patchfusion_depth_anything_vitb14'
# 'Zhyever/patchfusion_depth_anything_vitl14'
# 'Zhyever/patchfusion_zoedepth'

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
model = PatchFusion.from_pretrained(model_name).to(DEVICE).eval()
```

Running Without a Network Connection

- Manually download the checkpoints from [here](https://huggingface.co/zhyever/PatchFusion/tree/main). For example, if you want to use Depth-Anything vitl, you need to download three checkpoints: [coarse_pretrain.pth](https://huggingface.co/zhyever/PatchFusion/blob/main/depthanything_vitl_u4k/coarse_pretrain/checkpoint_24.pth), [fine_pretrain.pth](https://huggingface.co/zhyever/PatchFusion/blob/main/depthanything_vitl_u4k/fine_pretrain/checkpoint_24.pth), and [patchfusion.pth](https://huggingface.co/zhyever/PatchFusion/blob/main/depthanything_vitl_u4k/patchfusion/checkpoint_16.pth). (A scripted alternative is sketched after the table below.)
- Save them to a local folder, for example `./work_dir/depth-anything/ckps`.
- Then, set the checkpoint paths in the corresponding config file (e.g. `./configs/patchfusion_depthanything/depthanything_vitl_patchfusion_u4k.py` in this case):

```python
model.config.pretrain_model=['./work_dir/depth-anything/ckps/coarse_pretrain.pth', './work_dir/depth-anything/ckps/fine_pretrain.pth']
# Note: the default paths are './work_dir/depthanything_vitl_u4k/coarse_pretrain/checkpoint_24.pth' and
# './work_dir/depthanything_vitl_u4k/fine_pretrain/checkpoint_24.pth'. Look for this item and replace it accordingly.
```

- Lastly, load the model locally:

```python
import torch
from mmengine.config import Config
from mmengine.logging import print_log

cfg_path = './configs/patchfusion_depthanything/depthanything_vitl_patchfusion_u4k.py'
cfg = Config.fromfile(cfg_path)  # load the config for Depth-Anything vitl

model = build_model(cfg.model)  # build the model (build_model is this repo's model builder)
print_log(model.load_dict(torch.load(cfg.ckp_path)['model_state_dict']), logger='current')  # load the checkpoint
```

When the PatchFusion model is built, it loads the coarse and fine checkpoints in its `init` function. Because `patchfusion.pth` only contains the parameters of the fusion network, some warnings will appear here; this is expected. The idea is to save the coarse model, fine model, and fusion model separately.

- The corresponding config paths are listed below. Please make sure you pick the correct one before modifying it.

| Model Name | Config Path |
|---|---|
| Depth-Anything-vitl | `./configs/patchfusion_depthanything/depthanything_vitl_patchfusion_u4k.py` |
| Depth-Anything-vitb | `./configs/patchfusion_depthanything/depthanything_vitb_patchfusion_u4k.py` |
| Depth-Anything-vits | `./configs/patchfusion_depthanything/depthanything_vits_patchfusion_u4k.py` |
| ZoeDepth-N | `./configs/patchfusion_zoedepth/zoedepth_patchfusion_u4k.py` |
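
If you prefer to script the manual download step above, here is a minimal sketch using `huggingface_hub` (an assumption: the package is installed in your environment; the files land in subfolders mirroring the Hugging Face repository layout, so adjust the config paths accordingly):

```python
# Sketch: fetch the three Depth-Anything vitl checkpoints from the Hugging Face Hub.
# Each file is stored under local_dir in a subfolder mirroring the repo layout.
from huggingface_hub import hf_hub_download

files = [
    'depthanything_vitl_u4k/coarse_pretrain/checkpoint_24.pth',
    'depthanything_vitl_u4k/fine_pretrain/checkpoint_24.pth',
    'depthanything_vitl_u4k/patchfusion/checkpoint_16.pth',
]
for filename in files:
    local_path = hf_hub_download(repo_id='zhyever/PatchFusion', filename=filename,
                                 local_dir='./work_dir/depth-anything/ckps')
    print('downloaded to', local_path)
```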

User Inference

Running:

To execute user inference, use the following command:

```bash
python run.py ${CONFIG_FILE} --ckp-path <checkpoints> --cai-mode <m1 | m2 | rn> --cfg-option general_dataloader.dataset.rgb_image_dir='<img-directory>' [--save] --work-dir <output-path> --test-type general [--gray-scale] --image-raw-shape [h w] --patch-split-num [h w]
```

Arguments Explanation: more details can be found here.

Example Usage:

Below is an example command that demonstrates how to run the inference process:

```bash
python ./tools/test.py configs/patchfusion_depthanything/depthanything_general.py --ckp-path Zhyever/patchfusion_depth_anything_vitl14 --cai-mode r32 --cfg-option general_dataloader.dataset.rgb_image_dir='./examples/' --save --work-dir ./work_dir/predictions --test-type general --image-raw-shape 1080 1920 --patch-split-num 2 2
```

This example performs inference using the `depthanything_general.py` configuration for Depth-Anything, loads the specified checkpoint `patchfusion_depth_anything_vitl14`, sets the PatchFusion mode to `r32`, specifies the input image directory `./examples/`, and saves the output to `./work_dir/predictions`. The original dimensions of the input image are 1080x1920, and the input image is divided into 2x2 patches.

Easy Way to Import PatchFusion:

You can find this code snippet in `./tools/test_single_forward.py`.

```python
import cv2
import torch
import numpy as np
import torch.nn.functional as F
from torchvision import transforms

from estimator.models.patchfusion import PatchFusion

model_name = 'Zhyever/patchfusion_depth_anything_vitl14'

DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
model = PatchFusion.from_pretrained(model_name).to(DEVICE).eval()

image_raw_shape = model.tile_cfg['image_raw_shape']
image_resizer = model.resizer

image = cv2.imread('./examples/example_1.jpeg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) / 255.0
image = transforms.ToTensor()(np.asarray(image))  # raw image

image_lr = image_resizer(image.unsqueeze(dim=0)).float().to(DEVICE)
image_hr = F.interpolate(image.unsqueeze(dim=0), image_raw_shape, mode='bicubic', align_corners=True).float().to(DEVICE)

mode = 'r128'  # inference mode
process_num = 4  # batch process size

depth_prediction, _ = model(mode='infer', cai_mode=mode, process_num=process_num, image_lr=image_lr, image_hr=image_hr)
depth_prediction = F.interpolate(depth_prediction, image.shape[-2:])[0, 0].detach().cpu().numpy()  # depth shape is (h, w), matching the input image
```
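
As a small follow-up (not part of the repository; `depth_prediction` is the NumPy array produced by the snippet above, and the output paths are only illustrative), you could dump the prediction to disk for inspection:

```python
# Hypothetical follow-up: save the predicted depth as raw values plus a colorized preview.
# Assumes `depth_prediction` is the (h, w) NumPy array from the snippet above.
import os
import numpy as np
import matplotlib.pyplot as plt

os.makedirs('./work_dir', exist_ok=True)
np.save('./work_dir/example_1_depth.npy', depth_prediction)                   # raw metric depth
plt.imsave('./work_dir/example_1_depth.png', depth_prediction, cmap='magma')  # colorized preview
```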

More details about inference are provided here.

User Training

Please refer to user_training for more details.

Acknowledgement

We would like to thank AK (@_akhaliq) and @hysts from the Hugging Face team for their help.

Citation

If you find our work useful for your research, please consider citing the paper:

```bibtex
@inproceedings{li2023patchfusion,
    title={PatchFusion: An End-to-End Tile-Based Framework for High-Resolution Monocular Metric Depth Estimation},
    author={Zhenyu Li and Shariq Farooq Bhat and Peter Wonka},
    booktitle={CVPR},
    year={2024}
}
```