Yuming Gu1,2
·
You Xie2
·
Hongyi Xu2
·
Guoxian Song2
·
Yichun Shi2
·
Di Chang1,2
·
Jing Yang1
·
Linjie Luo2
·
1University of Southern California 2ByteDance Inc.
- **[2024.03.18]** Released code.
- **[2024.02.26]** Congratulations to our team! Our paper has been accepted as a CVPR 2024 Highlight, see you in Seattle!
- **[2023.12.28]** Released the DiffPortrait3D paper and project page.
Please download the pretrained checkpoint from Google Drive here.
Place the pretrained weights as follows:
DiffPortrait3D
|----checkpoints
|----model_state-540000-001.th
Our environment uses Python 3.8.5 and CUDA 11.7; other compatible versions may also work.
conda env create -f diffportrait3D.yml
conda activate diffportrait3D
We tested our code on NVIDIA V100, A100, and A6000 GPUs.
bash script/CVPR_Inference/inference_sample.sh
Due to company IP policy, we cannot release the 3D-aware noise model. We therefore encourage you to obtain the 3D-aware noise from another pretrained 3D GAN method; models such as GOAE and Triplanenet can serve as very good 3D-aware noise initializations. Please also refer to EG3D for generating the aligned camera condition.
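As a minimal sketch of what an EG3D-style "aligned camera condition" looks like: EG3D conditions its generator on a 25-dimensional vector formed by flattening the 4x4 camera-to-world extrinsic matrix and the 3x3 normalized intrinsic matrix. The function name and the specific focal value below are illustrative assumptions, not part of this repository.

```python
import numpy as np

def build_camera_condition(cam2world: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Flatten a 4x4 camera-to-world extrinsic matrix and a 3x3 normalized
    intrinsic matrix into the 25-dim conditioning vector used by EG3D."""
    assert cam2world.shape == (4, 4) and intrinsics.shape == (3, 3)
    return np.concatenate([cam2world.reshape(-1), intrinsics.reshape(-1)])

# Example: identity pose with normalized intrinsics (the focal length here is
# the value commonly used for EG3D's FFHQ models; treat it as an assumption).
pose = np.eye(4)
K = np.array([[4.2647, 0.0,    0.5],
              [0.0,    4.2647, 0.5],
              [0.0,    0.0,    1.0]])
cond = build_camera_condition(pose, K)
print(cond.shape)  # (25,)
```

The resulting vector can then be fed wherever a pretrained 3D GAN expects its camera conditioning input.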
If you find our work useful, please consider citing:
@inproceedings{gu2024diffportrait3d,
title={DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis},
author={Yuming Gu and Hongyi Xu and You Xie and Guoxian Song and Yichun Shi and Di Chang and Jing Yang and Linjie Luo},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2024}
}
This research reference implementation is treated as a one-time code drop; therefore, we may be slow to accept external code contributions through pull requests.
Our code builds on several excellent repositories. We thank their authors for making their code publicly available.