https://ys-imtech.github.io/HyperDreamer/
Apache License 2.0

HyperDreamer (SIGGRAPH Asia 2023)

Project page | Paper

Tong Wu, Zhibing Li, Shuai Yang, Pan Zhang, Xingang Pan, Jiaqi Wang, Dahua Lin, Ziwei Liu

Official implementation of HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image

Installation

Install Dependencies:

# install kaolin
pip install kaolin==0.14.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu113.html

Download pretrained models

1) Download control_v11p_sd15_normalbae.pth from its Hugging Face model page and put it under pretrained/controlnet/....

2) Download the Stable Diffusion 1.5 checkpoint v1-5-pruned.ckpt and put it under pretrained/controlnet/....

Quickstart

Preprocess the input image to remove the background and obtain its depth, normal, and caption:

python preprocess_image.py /path/to/image.png
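preprocess_image.py handles all of these steps in one pass. Purely as an illustration of what the background-removal step produces — assuming the segmenter has already yielded an RGBA image whose alpha channel marks the foreground (the helper name and white-background convention below are ours, not the script's actual internals) — compositing away the background looks like:

```python
import numpy as np

def composite_on_white(rgba: np.ndarray) -> np.ndarray:
    """Blend an (H, W, 4) float RGBA image in [0, 1] onto a white background.

    Pixels with alpha = 0 become pure white; alpha = 1 keeps the original color.
    """
    rgb, alpha = rgba[..., :3], rgba[..., 3:4]
    return rgb * alpha + (1.0 - alpha)  # the white background contributes (1 - alpha)
```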

We adopt a two-stage training pipeline. You can run it with:

image_path='data/strawberry_rgba.png'
nerf_workspace='exp/strawberry_s1'
dmtet_workspace='exp/strawberry_s2'

# Stage 1: NeRF
bash run_nerf.sh ${image_path} ${nerf_workspace}

# Stage 2: DMTet
bash run_dmtet.sh ${image_path} ${nerf_workspace} ${dmtet_workspace}
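If you prefer driving both stages from Python, a small wrapper (the helper names here are ours, not part of the repo) can chain the two scripts and stop at the first failure:

```python
import subprocess

def two_stage_commands(image_path: str, nerf_ws: str, dmtet_ws: str) -> list:
    """Build the argv lists for the NeRF and DMTet stages."""
    return [
        ["bash", "run_nerf.sh", image_path, nerf_ws],
        ["bash", "run_dmtet.sh", image_path, nerf_ws, dmtet_ws],
    ]

def run_two_stage(image_path: str, nerf_ws: str, dmtet_ws: str) -> None:
    for cmd in two_stage_commands(image_path, nerf_ws, dmtet_ws):
        subprocess.run(cmd, check=True)  # check=True aborts if a stage fails
```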

[Optional] We also support importing pre-defined material masks in the reference view. You can use Semantic-SAM or Materialistic to obtain more accurate masks.

bash run_dmtet.sh ${image_path} ${nerf_workspace} ${dmtet_workspace} --material_masks material_masks/xxx.npy
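The expected .npy layout is not documented here. As a sketch under the assumption that it is a per-pixel integer label map of shape (H, W), with one id per material region (check the repo's data-loading code for the real format), you could build and save one like this:

```python
import os
import numpy as np

def make_material_mask(h: int, w: int) -> np.ndarray:
    """Toy (h, w) label map: material 1 on the left half, material 2 on the right."""
    mask = np.zeros((h, w), dtype=np.int64)
    mask[:, : w // 2] = 1
    mask[:, w // 2 :] = 2
    return mask

if __name__ == "__main__":
    os.makedirs("material_masks", exist_ok=True)
    np.save("material_masks/strawberry.npy", make_material_mask(512, 512))
```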

To relight with a given environment map:

bash run_dmtet.sh ${image_path} ${nerf_workspace} ${dmtet_workspace} --test --relight_sg envmaps/lgtSGs_studio.npy
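The lgtSGs_* naming suggests that each environment map is stored as a mixture of spherical Gaussians, as in PhySG-style relighting. Assuming each row holds [lobe axis (3), sharpness (1), RGB amplitude (3)] — an assumption to verify against the repo — evaluating the incoming radiance for a unit direction d would look like:

```python
import numpy as np

def eval_sg_envmap(lgt_sgs: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Evaluate a spherical-Gaussian envmap at unit direction d.

    Assumes lgt_sgs has shape (K, 7), each row [lobe_x, lobe_y, lobe_z,
    sharpness, r, g, b], i.e. L(d) = sum_k a_k * exp(lambda_k * (dot(xi_k, d) - 1)).
    """
    lobes = lgt_sgs[:, :3]
    lobes = lobes / np.linalg.norm(lobes, axis=1, keepdims=True)  # unit lobe axes
    sharpness = np.abs(lgt_sgs[:, 3:4])
    amplitude = np.abs(lgt_sgs[:, 4:7])
    cos = lobes @ d  # cosine between each lobe axis and the query direction
    return (amplitude * np.exp(sharpness * (cos[:, None] - 1.0))).sum(axis=0)
```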

To run editing:

python editing/scripts/run_editing.py --config_path=editing/configs/sculpture.yaml

Gradio Demo (Editing)

python editing/app_edit.py

Acknowledgement

This code is built on the open-source projects stable-dreamfusion, Zero123, derender3d, SAM and PASD.

Thanks to the maintainers of these projects for their contribution to the community!

Citation

If you find HyperDreamer helpful for your research, please cite:

@InProceedings{wu2023hyperdreamer,
  author    = {Tong Wu and Zhibing Li and Shuai Yang and Pan Zhang and Xingang Pan and Jiaqi Wang and Dahua Lin and Ziwei Liu},
  title     = {HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image},
  booktitle = {ACM SIGGRAPH Asia 2023 Conference Proceedings},
  year      = {2023}
}