Project Page • Arxiv • Demo • FAQ • Citation
https://github.com/OpenTexture/Paint3D/assets/18525299/9aef7eeb-a783-482c-87d5-78055da3bfc0
Paint3D is a novel coarse-to-fine generative framework that produces high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes, conditioned on text or image inputs.
The code is tested on CentOS 7 with PyTorch 1.12.1 and CUDA 11.6 installed. Please follow the steps below to set up the environment.
# install python environment
conda env create -f environment.yaml
# install kaolin
pip install kaolin==0.13.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/{TORCH_VER}_{CUDA_VER}.html
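For the tested setup above (PyTorch 1.12.1 + CUDA 11.6), the placeholder URL resolves roughly as in the sketch below, following the Kaolin wheel-index naming scheme; adjust the suffix to your own torch/CUDA versions.
# example for PyTorch 1.12.1 + CUDA 11.6; change the suffix to match your versions
pip install kaolin==0.13.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu116.html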
The UV-position ControlNet is available here.
To use the other ControlNet models, please download them from the Hugging Face page and modify the ControlNet path in the config file.
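As a minimal sketch, you can fetch a ControlNet checkpoint locally by cloning it from Hugging Face with git-lfs and then pointing the config file (e.g. controlnet/config/depth_based_inpaint_template.yaml) at the resulting folder; the repository id and target directory below are only illustrative.
# illustrative only: clone a depth ControlNet checkpoint, then reference this folder in the config
git lfs install
git clone https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth ckpts/control_v11f1p_sd15_depth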
Then, you can generate a coarse texture via:
python pipeline_paint3d_stage1.py \
--sd_config controlnet/config/depth_based_inpaint_template.yaml \
--render_config paint3d/config/train_config_paint3d.py \
--mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
--outdir outputs/stage1
and the refined texture via:
python pipeline_paint3d_stage2.py \
--sd_config controlnet/config/UV_based_inpaint_template.yaml \
--render_config paint3d/config/train_config_paint3d.py \
--mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
--texture_path outputs/stage1/res-0/albedo.png \
--outdir outputs/stage2
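The coarse albedo map consumed by stage 2 is the file passed as --texture_path above (outputs/stage1/res-0/albedo.png), and the refined result lands under outputs/stage2; a quick sanity check between the two runs:
# sanity check: the coarse texture from stage 1 and the stage-2 output directory
ls outputs/stage1/res-0/albedo.png
ls outputs/stage2/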
Optionally, you can also generate texture results with the UV-position ControlNet only, for example:
python pipeline_UV_only.py \
--sd_config controlnet/config/UV_gen_template.yaml \
--render_config paint3d/config/train_config_paint3d.py \
--mesh_path demo/objs/teapot/scene.obj \
--outdir outputs/test_teapot
With an image condition, you can generate a coarse texture via:
python pipeline_paint3d_stage1.py \
--sd_config controlnet/config/depth_based_inpaint_template.yaml \
--render_config paint3d/config/train_config_paint3d.py \
--mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
--prompt " " \
--ip_adapter_image_path demo/objs/Suzanne_monkey/img_prompt.png \
--outdir outputs/img_stage1
and the refined texture via:
python pipeline_paint3d_stage2.py \
--sd_config controlnet/config/UV_based_inpaint_template.yaml \
--render_config paint3d/config/train_config_paint3d.py \
--mesh_path demo/objs/Suzanne_monkey/Suzanne_monkey.obj \
--texture_path outputs/img_stage1/res-0/albedo.png \
--prompt " " \
--ip_adapter_image_path demo/objs/Suzanne_monkey/img_prompt.png \
--outdir outputs/img_stage2
For Civitai checkpoints that only provide a .safetensors file, you can use the following script to convert them before use.
python tools/convert_original_stable_diffusion_to_diffusers.py \
--checkpoint_path YOUR_LOCAL.safetensors \
--dump_path model_cvt/ \
--from_safetensors
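After conversion, model_cvt/ holds a diffusers-style model directory; the listing below is what a typical diffusers dump looks like (illustrative, not verified against any particular checkpoint). Reference this directory wherever your config expects the Stable Diffusion model path.
# typical diffusers layout written by the conversion script (illustrative)
ls model_cvt/
# model_index.json  scheduler/  text_encoder/  tokenizer/  unet/  vae/  ...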
@inproceedings{zeng2024paint3d,
title={Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models},
author={Zeng, Xianfang and Chen, Xin and Qi, Zhongqi and Liu, Wen and Zhao, Zibo and Wang, Zhibin and Fu, Bin and Liu, Yong and Yu, Gang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={4252--4262},
year={2024}
}
Thanks to TEXTure, Text2Tex, Stable Diffusion, and ControlNet; our code partially borrows from them. Our approach is inspired by MotionGPT, Michelangelo, and DreamFusion.
This code is distributed under the Apache 2.0 LICENSE.
Note that our code depends on other libraries, including PyTorch3D and PyTorch Lightning, and uses datasets, each of which has its own license that must also be followed.