Official implementation of Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image.
Kailu Wu, Fangfu Liu, Zhihan Cai, Runjie Yan, Hanyang Wang, Yating Hu, Yueqi Duan, Kaisheng Ma
If the Gradio Demo is overcrowded or fails to produce stable results, you can use the Online Demo at aiuni.ai, which is free to try (get a registration invitation code by joining our Discord: https://discord.gg/aiuni). Note that the Online Demo differs slightly from the Gradio Demo: inference is slower, but generation is much more stable.
High-fidelity and diverse textured meshes generated by Unique3D from single-view wild images in 30 seconds.
The repo is still under construction; thanks for your patience.
Adapted for Ubuntu 22.04.4 LTS and CUDA 12.1.
conda create -n unique3d python=3.11
conda activate unique3d
pip install ninja
pip install diffusers==0.27.2
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.3.1/index.html
pip install -r requirements.txt
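After installing, a quick way to confirm the pinned dependencies actually landed in the environment is to query their installed versions. This is a hypothetical helper, not part of the repo; the package names are the pip distribution names used above:

```python
# Hypothetical sanity check (not part of the repo): report which of the
# pinned dependencies are importable and which versions were installed.
from importlib import metadata

def installed_version(pkg):
    """Return the installed version string for `pkg`, or None if absent."""
    try:
        return metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return None

if __name__ == "__main__":
    for pkg in ("ninja", "diffusers", "mmcv-full"):
        print(pkg, installed_version(pkg) or "NOT INSTALLED")
```

If `diffusers` reports anything other than 0.27.2, reinstall the pinned version before running the demo.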
oak-barry provides another setup script for torch210+cu121 here.
Thanks to jtydhr88 for the Windows installation method! See issues/15. Based on issues/15, jtydhr88 also implemented a bat script to run the commands, so you can:
conda create -n unique3d-py311 python=3.11
conda activate unique3d-py311
For more details, refer to issues/15.
Download the weights from Hugging Face Spaces or the Tsinghua Cloud Drive, and extract them into ckpt/:
Unique3D
├── ckpt
│   ├── controlnet-tile/
│   ├── image2normal/
│   ├── img2mvimg/
│   ├── realesrgan-x4.onnx
│   └── v1-inference.yaml
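Before launching the demo, you can verify that the ckpt/ directory matches the layout above. This is a hypothetical helper, not part of the repo; it only checks for the entries listed in the tree:

```python
# Hypothetical helper (not part of the repo): check that the ckpt/
# directory contains the entries listed in the README tree.
from pathlib import Path

EXPECTED = [
    "controlnet-tile",
    "image2normal",
    "img2mvimg",
    "realesrgan-x4.onnx",
    "v1-inference.yaml",
]

def missing_ckpt_entries(root="ckpt"):
    """Return the expected checkpoint entries missing under `root`."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]

if __name__ == "__main__":
    missing = missing_ckpt_entries()
    if missing:
        print("Missing checkpoint entries:", ", ".join(missing))
    else:
        print("ckpt/ layout looks complete.")
```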
Run the interactive inference locally.
python app/gradio_local.py --port 7860
Thanks to jtydhr88 for the ComfyUI-Unique3D implementation!
Important: because meshes are normalized by the longest edge of their xyz extent during training, the input image should show the object's longest edge during inference; otherwise you may get erroneously squashed results.
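One simple way to satisfy this is to paste the object crop onto a square canvas with a margin, so the longest edge is fully visible and never clipped. The sketch below is a hypothetical preprocessing step, not part of the repo; `square_canvas` and its `margin` parameter are our own names:

```python
# Hypothetical preprocessing sketch (not part of the repo): given an object
# crop of size (w, h), compute a square canvas side and paste offsets so the
# object's longest edge fits fully, with `margin` padding on each side.
def square_canvas(w, h, margin=0.1):
    """Return (side, x_offset, y_offset) for pasting a (w, h) crop
    centered on a square canvas sized to the longest edge plus margin."""
    side = int(round(max(w, h) * (1 + 2 * margin)))
    return side, (side - w) // 2, (side - h) // 2
```

With PIL, for example, one would create a new square image of that side, paste the crop at `(x_offset, y_offset)`, and feed the result to the demo.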
Our code builds heavily on the following repositories. Many thanks to the authors for sharing their code.
Our mission is to create a 4D generative model with 3D concepts. This is just our first step, and the road ahead is still long, but we are confident. We warmly invite you to join the discussion and explore potential collaborations in any capacity. If you're interested in connecting or partnering with us, please don't hesitate to reach out via email (wkl22@mails.tsinghua.edu.cn).
If you found Unique3D helpful, please cite our report:
@misc{wu2024unique3d,
title={Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image},
author={Kailu Wu and Fangfu Liu and Zhihan Cai and Runjie Yan and Hanyang Wang and Yating Hu and Yueqi Duan and Kaisheng Ma},
year={2024},
eprint={2405.20343},
archivePrefix={arXiv},
primaryClass={cs.CV}
}