
MonoHair: High-Fidelity Hair Modeling from a Monocular Video (CVPR 2024 Oral) [Project page]

This repository is the official code for MonoHair. Given a monocular video, MonoHair reconstructs a high-fidelity 3D strand model.

This repository also includes examples of reconstructing 3D hair from a monocular video or from multi-view images.

Getting Started

Clone the repository and install requirements:

git clone https://github.com/KeyuWu-CS/MonoHair.git --recursive
cd MonoHair
conda create -n MonoHair python==3.10.12
conda activate MonoHair
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
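
As a quick sanity check (assuming the installs above succeeded), verify that the CUDA build of PyTorch is active:

# Should print roughly: 1.11.0+cu113 11.3 True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"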

Dependencies and submodules

Install PyTorch, PyTorch3D and tiny-cuda-nn. We have tested on Ubuntu 22.04.4 with gcc==9.5, python==3.10.12, pytorch==1.11.0 and pytorch3d==0.7.2, using CUDA 11.3 on an RTX 3090 Ti. You can install any versions that are compatible with these dependencies. Note that torch==1.13.0 is known to cause problems with MODNet.

# If PyTorch3D fails to install, try installing fvcore first: pip install fvcore==0.1.5.post20220512
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py310_cu113_pyt1110/download.html

Initialize the submodules (Instant-NGP, MODNet, CDGNet, DELTA and face-parsing) and install the tiny-cuda-nn PyTorch bindings:

git submodule update --init --recursive
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
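
As a quick check that both CUDA extensions built correctly (a minimal sanity check, not part of the official pipeline), the following imports should succeed without errors:

# Failures here usually indicate a CUDA/PyTorch version mismatch.
python -c "import pytorch3d; print('pytorch3d', pytorch3d.__version__)"
python -c "import tinycudann; print('tiny-cuda-nn OK')"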

If you have problems installing tiny-cuda-nn or PyTorch3D, please refer to their repositories.

Compile Instant-NGP and copy our modified run.py to instant-ngp/scripts:

cp run.py submodules/instant-ngp/scripts/
cd submodules/instant-ngp
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build --config RelWithDebInfo -j
cd ../..
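
If the build succeeds, the compiled binary appears under build/. The executable name depends on the Instant-NGP revision (newer revisions produce instant-ngp, older ones testbed), so adjust the path accordingly:

# Run from submodules/instant-ngp: confirm the executable was produced.
ls build/instant-ngp 2>/dev/null || ls build/testbed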

If you have problems compiling Instant-NGP, please refer to their instructions.

Download assets

Download the pretrained models for MODNet, CDGNet and face-parsing.

# Download some pretrained models and data for avatar optimization.
# You should download the pretrained CDGNet model from their repository; there are
# two files named "LIP_epoch_149.pth", please download the one with a size of ~300MB.
bash fetch_pretrained_model.sh
bash fetch_data.sh    # this may take a long time.
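
To confirm you grabbed the correct CDGNet checkpoint, check its file size; the path below assumes the weights live in the CDGNet submodule, so adjust it to wherever your setup places them:

# The correct LIP_epoch_149.pth is ~300MB; a much smaller file is the wrong one.
ls -lh submodules/CDGNet/LIP_epoch_149.pth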

Download examples

Download our example data from OneDrive. To guarantee consistent results, we have already run COLMAP and saved the pretrained Instant-NGP weights. You then need to run the following four steps to obtain the results. We also provide the full results (including intermediate results) in the full folder, which you can use to check the output of each step.
Tip: Since the wig examples all use the same synthetic human head, we do not use the FLAME (SMPL-X) model as a template and do not run multi-view bust fitting.

3D Hair Reconstruction

# Prepare data: Instant-NGP initialization, segmentation, Gabor filtering, etc.
# You can skip this step if you use our provided data.
python prepare_data.py --yaml=configs/reconstruct/big_wavy1 

# Hair exterior optimization
python PMVO.py --yaml=configs/reconstruct/big_wavy1

# Hair interior inference
python infer_inner.py --yaml=configs/reconstruct/big_wavy1

# Strands generation
python HairGrow.py --yaml=configs/reconstruct/big_wavy1
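
Since all four steps take the same --yaml flag, they can be chained into one shell loop for convenience (a small wrapper we suggest, not an official script; big_wavy1 is the provided example, and other cases just need their own config under configs/reconstruct/):

# Run the full reconstruction pipeline for one or more cases back to back.
for CASE in big_wavy1; do
    python prepare_data.py --yaml=configs/reconstruct/$CASE
    python PMVO.py --yaml=configs/reconstruct/$CASE
    python infer_inner.py --yaml=configs/reconstruct/$CASE
    python HairGrow.py --yaml=configs/reconstruct/$CASE
done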

Visualization

Download our released program from OneDrive to visualize the results.

# First copy output/10-16/full/connected_strands.hair to the viewer's Voxel_hair folder
cp data/case_name/output/10-16/full/connected_strands.hair data/case_name/ours/Voxel_hair

# Unzip VoxelHair_demo_v3.zip and launch VoxelHair_v1.exe.
# Then click "Load Strands" to visualize the results. You can also use Blender for realistic rendering.

Test your own data

In the provided examples, we skipped the steps of running COLMAP and training Instant-NGP. If you want to test your own captured videos, please refer to the following steps.
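
Until those steps are documented here, a plausible preprocessing sketch using Instant-NGP's bundled colmap2nerf.py helper looks like the following; the frame rate, aabb_scale and paths are illustrative assumptions, not values prescribed by the authors (COLMAP and ffmpeg must be on your PATH):

# Extract frames from your video and run COLMAP via Instant-NGP's helper script.
cd submodules/instant-ngp
python scripts/colmap2nerf.py --video_in /path/to/your_video.mp4 --video_fps 2 --run_colmap --aabb_scale 16
cd ../..
# Then create a config for your case under configs/reconstruct/ and run the four
# reconstruction steps above with --yaml=configs/reconstruct/<your_case>.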

Citation

@inproceedings{wu2024monohair,
  title={MonoHair: High-Fidelity Hair Modeling from a Monocular Video},
  author={Wu, Keyu and Yang, Lingchen and Kuang, Zhiyi and Feng, Yao and Han, Xutao and Shen, Yuefan and Fu, Hongbo and Zhou, Kun and Zheng, Youyi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={24164--24173},
  year={2024}
}

Acknowledgments

Here are some great resources we benefited from:

TO DO List