
CraftsMan: High-fidelity Mesh Generation with 3D Native Diffusion and Interactive Geometry Refiner
https://craftsman3d.github.io/


CraftsMan3D: High-fidelity Mesh Generation
with 3D Native Generation and Interactive Geometry Refiner

Weiyu Li*1,2, Jiarui Liu1,2, Hongyu Yan1, Rui Chen1,2, Yixun Liang2,3, Xuelin Chen4, Ping Tan1,2, Xiaoxiao Long1,2

1HKUST, 2LightIllusions, 3HKUST(GZ), 4Tencent AI Lab


Usage

from craftsman import CraftsManPipeline
import torch

# load from local ckpt
# pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman-v1-5", device="cuda:0", torch_dtype=torch.float32) 

# load from huggingface model hub
pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman-v1-5", device="cuda:0", torch_dtype=torch.float32)

# inference
mesh = pipeline("https://pub-f9073a756ec645d692ce3d171c2e1232.r2.dev/data/werewolf.png").meshes[0]
mesh.export("werewolf.obj")
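`mesh.export` writes a standard Wavefront OBJ file. As a quick sanity check on the exported mesh, a stdlib-only sketch (the filename is the one from the example above; this helper is not part of the CraftsMan API) that counts its vertex and face records:

```python
def count_obj_elements(path):
    """Count vertex ('v') and face ('f') records in a Wavefront OBJ file."""
    vertices = faces = 0
    with open(path) as fh:
        for line in fh:
            tag = line.split(maxsplit=1)[0] if line.strip() else ""
            if tag == "v":
                vertices += 1
            elif tag == "f":
                faces += 1
    return vertices, faces

# e.g. after running the pipeline above:
# v, f = count_obj_elements("werewolf.obj")
# print(f"{v} vertices, {f} faces")
```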


TL;DR: CraftsMan3D (aka 匠心) is a two-stage text/image-to-3D mesh generation model. Mimicking the modeling workflow of an artist or craftsman, we first generate a coarse mesh (5s) with smooth geometry using a 3D diffusion model, and then refine it (20s) using enhanced multi-view normal maps generated by a 2D normal diffusion model; the refinement can also be performed interactively, as in ZBrush.

Important: the released checkpoints are mainly trained on characters, so the model performs better in that category; we plan to release more advanced pretrained models in the future.

✨ Overview

This repo contains the source code (training and inference) of our 3D diffusion model, pretrained weights, and Gradio demo code for our 3D mesh generation project. You can find more visualizations on our project page and try our demo there. If you have high-quality 3D data or other ideas, we very much welcome any form of cooperation.

Full abstract: We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner. Despite significant advancements in 3D generation, existing methods still struggle with lengthy optimization processes, irregular mesh topologies, noisy surfaces, and difficulties in accommodating user edits, consequently impeding their widespread adoption and implementation in 3D modeling software. Our work is inspired by the craftsman, who usually roughs out the holistic figure of the work first and elaborates the surface details subsequently. Specifically, we employ a 3D native diffusion model, which operates on a latent space learned from latent set-based 3D representations, to generate coarse geometries with regular mesh topology in seconds. In particular, this process takes as input a text prompt or a reference image and leverages a powerful multi-view (MV) diffusion model to generate multiple views of the coarse geometry, which are fed into our MV-conditioned 3D diffusion model for generating the 3D geometry, significantly improving robustness and generalizability. Following that, a normal-based geometry refiner is used to significantly enhance the surface details. This refinement can be performed automatically, or interactively with user-supplied edits. Extensive experiments demonstrate that our method achieves high efficiency in producing superior quality 3D assets compared to existing methods.


Environment Setup

Hardware: We trained our model on 32× A800 GPUs with a batch size of 32 per GPU for 7 days. The mesh refinement step runs on a single RTX 3080 GPU.
Setup environment :smiley:

We also provide a Dockerfile for easy installation; see [Setup using Docker](./docker/README.md).

- Python 3.10.0
- PyTorch 2.1.0
- CUDA Toolkit 11.8.0
- Ubuntu 22.04

Clone this repository:

```sh
git clone https://github.com/wyysf-98/CraftsMan.git
```

Install the required packages:

```sh
conda create -n CraftsMan python=3.10 -y
conda activate CraftsMan
conda install cudatoolkit=11.8 -c pytorch -y
pip install torch==2.3.0 torchvision==0.18.0
pip install -r docker/requirements.txt
```

🎥 Video

Watch the video

3D Native DiT Model (Latent Set DiT Model)

We provide the training and inference code here for future research. The latent set VAE model is heavily built on the same structure as Michelangelo. The latent set diffusion model is based on DiT/PixArt-alpha and has 500M parameters.

Pretrained models

Currently, we provide DiT models conditioned on a single-view image. We will consider open-sourcing further models depending on circumstances. If you run inference.py without specifying a model path, it will automatically download the model from the Hugging Face model hub.

Or you can download the model manually:

## download the files directly with wget (use /resolve/ URLs; /blob/ URLs return HTML pages):
wget https://huggingface.co/craftsman3d/craftsman-v1-5/resolve/main/config.yaml
wget https://huggingface.co/craftsman3d/craftsman-v1-5/resolve/main/model.ckpt

## or you can git clone the repo:
git lfs install
git clone https://huggingface.co/craftsman3d/craftsman-v1-5

If you download the models using wget, place them manually under the ckpts/craftsman-v1-5 directory.
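Hugging Face serves raw files at resolve/ URLs (blob/ URLs return the repo's HTML page instead). A small helper (hypothetical, not part of this repo) that constructs direct-download URLs for any file in the checkpoint repo:

```python
def hf_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct-download (resolve) URL for a file in a Hugging Face repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# e.g. hf_file_url("craftsman3d/craftsman-v1-5", "model.ckpt")
# -> "https://huggingface.co/craftsman3d/craftsman-v1-5/resolve/main/model.ckpt"
```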

Gradio demo

We provide gradio demos for easy usage.

python gradio_app.py --model_path ./ckpts/craftsman-v1-5

Inference

To generate 3D meshes from a folder of images via the command line, simply run:

python inference.py --input eval_data --device 0 --model ./ckpts/craftsman-v1-5

For more configs, please refer to the inference.py.
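inference.py processes every image in the input folder. As an illustration of that scan (the extension list is an assumption; check inference.py for the actual one), collecting the candidate images might look like:

```python
from pathlib import Path

# assumed extensions; inference.py may accept a different set
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def collect_images(folder: str) -> list[Path]:
    """Return image files in `folder`, sorted for a reproducible order."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)
```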

Train from scratch

We provide our training code to facilitate future research, along with a data sample in data. If you want the full dataset, please fill in the form.

Data hosting incurs costs, so if you could help share our work on social media in any form, we would be grateful. As a token of our appreciation, you will receive download links to exclusive content stored on AWS S3, which can achieve download speeds of 20-100 MB/s.

For more training details and configs, please refer to the configs folder.

### training the shape-autoencoder
python launch.py --config ./configs/shape-autoencoder/michelangelo-l768-e64-ne8-nd16.yaml \
                 --train --gpu 0

### training the image-to-shape diffusion model
python launch.py --config ./configs/image-to-shape-diffusion/clip-dino-rgb-pixart-lr2e4-ddim.yaml \
                 --train --gpu 0
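The launch.py invocations above share a small flag set; a minimal argparse sketch of that interface (an illustration only, not the repo's actual parser):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the launcher's command-line interface."""
    parser = argparse.ArgumentParser(description="Training launcher (sketch)")
    parser.add_argument("--config", required=True, help="path to a YAML config")
    parser.add_argument("--train", action="store_true", help="run in training mode")
    parser.add_argument("--gpu", default="0", help="GPU id(s) to use")
    return parser

# mirrors the shape-autoencoder command above
args = build_parser().parse_args(
    ["--config", "./configs/shape-autoencoder/michelangelo-l768-e64-ne8-nd16.yaml",
     "--train", "--gpu", "0"])
```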

2D Normal Enhancement Diffusion Model (TBA)

We are diligently working on the release of our mesh refinement code. Your patience is appreciated as we put the final touches on this exciting development. 🔧🚀

You can also find the video of mesh refinement part in the video.

❓Common questions

Q: Any tips for getting better results?

  1. Due to limited resources, we will gradually expand the dataset and training scale, and release more pretrained models in the future.
  2. As with 2D diffusion models, try different seeds, adjust the CFG scale, or switch the scheduler. Good luck.
  3. We will provide a version conditioned on text prompts, so that you can use positive and negative prompts.
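Tip 2 amounts to re-sampling with a different RNG seed: the same seed reproduces a result, a new seed gives a new sample. The general pattern, illustrated here with Python's random module (for the actual pipeline you would seed torch, e.g. via torch.manual_seed, before calling it):

```python
import random

def sample_with_seed(seed: int) -> float:
    """Draw one number from a freshly seeded generator: same seed, same draw."""
    rng = random.Random(seed)
    return rng.random()

# identical seeds reproduce the sample; different seeds explore new ones
assert sample_with_seed(42) == sample_with_seed(42)
assert sample_with_seed(42) != sample_with_seed(7)
```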

💪 ToDo List

🤗 Acknowledgements

📑License

CraftsMan-v1-5 shares the same license as SD 1.5, i.e., CreativeML Open RAIL-M. If you have any questions about the usage of CraftsMan, please contact us first.

📖 BibTeX

@misc{li2024craftsman,
  title         = {CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner},
  author        = {Weiyu Li and Jiarui Liu and Hongyu Yan and Rui Chen and Yixun Liang and Xuelin Chen and Ping Tan and Xiaoxiao Long},
  year          = {2024},
  eprint        = {2405.14979},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CG}
}