Implementation of MeshGPT, SOTA mesh generation using attention, in PyTorch

Will also add text conditioning, for eventual text-to-3d asset generation
Please join if you are interested in collaborating with others to replicate this work
Update: Marcus has trained and uploaded a working model to 🤗 Huggingface!
- StabilityAI, A16Z Open Source AI Grant Program, and 🤗 Huggingface for the generous sponsorships, as well as my other sponsors, for affording me the independence to open source current artificial intelligence research

- Einops for making my life easy

- Marcus for the initial code review (pointing out some missing derived features) as well as running the first successful end-to-end experiments

- Marcus for the first successful training of a collection of shapes conditioned on labels

- Quexi Ma for finding numerous bugs with automatic eos handling

- Yingtian for finding a bug with the gaussian blurring of the positions for spatial label smoothing

- Marcus yet again for running the experiments to validate that it is possible to extend the system from triangles to quads

- Marcus for identifying an issue with text conditioning and for running all the experiments that led to it being resolved
```bash
$ pip install meshgpt-pytorch
```
```python
import torch

from meshgpt_pytorch import (
    MeshAutoencoder,
    MeshTransformer
)

# autoencoder

autoencoder = MeshAutoencoder(
    num_discrete_coors = 128
)

# mock inputs

vertices = torch.randn((2, 121, 3))        # (batch, num vertices, coor (3))
faces = torch.randint(0, 121, (2, 64, 3))  # (batch, num faces, vertices (3))

# make sure faces are padded with `-1` for variable-length meshes

# forward in the faces

loss = autoencoder(
    vertices = vertices,
    faces = faces
)

loss.backward()

# after much training...
# you can pass in the raw face data above to train a transformer to model this sequence of face vertices

transformer = MeshTransformer(
    autoencoder,
    dim = 512,
    max_seq_len = 768
)

loss = transformer(
    vertices = vertices,
    faces = faces
)

loss.backward()

# after much training of the transformer, you can now sample novel 3d assets

faces_coordinates, face_mask = transformer.generate()

# (batch, num faces, vertices (3), coordinates (3)), (batch, num faces)

# now post process for the generated 3d asset
```
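As a minimal sketch of that post-processing step, the generated coordinates can be written out as a Wavefront OBJ file. `save_as_obj` below is a hypothetical helper for illustration, not part of this library:

```python
def save_as_obj(faces_coordinates, face_mask, path = 'mesh.obj'):
    # hypothetical helper - keep only the valid (unmasked) faces of the first mesh in the batch
    faces = faces_coordinates[0][face_mask[0]]  # (num valid faces, vertices (3), coordinates (3))

    vertices = faces.reshape(-1, 3)             # three vertices per face, flattened

    with open(path, 'w') as f:
        for x, y, z in vertices.tolist():
            f.write(f'v {x} {y} {z}\n')

        # obj indices are 1-based; each consecutive vertex triple forms one triangle
        for i in range(0, vertices.shape[0], 3):
            f.write(f'f {i + 1} {i + 2} {i + 3}\n')

save_as_obj(faces_coordinates, face_mask)
```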
For text-conditioned 3d shape synthesis, simply set `condition_on_text = True` on your `MeshTransformer`, and then pass in your list of descriptions as the `texts` keyword argument, ex.
```python
transformer = MeshTransformer(
    autoencoder,
    dim = 512,
    max_seq_len = 768,
    condition_on_text = True
)

loss = transformer(
    vertices = vertices,
    faces = faces,
    texts = ['a high chair', 'a small teapot'],
)

loss.backward()

# after much training of the transformer, you can now sample novel 3d assets conditioned on text

faces_coordinates, face_mask = transformer.generate(
    texts = ['a long table'],
    cond_scale = 8.,                  # a cond_scale > 1. will enable classifier free guidance - can be placed anywhere from 3. to 10.
    remove_parallel_component = True  # from https://arxiv.org/abs/2410.02416
)
```
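For intuition on those two keyword arguments, classifier free guidance with the parallel component removed can be sketched roughly as below. This is a conceptual illustration with my own naming, not this library's internals:

```python
import torch.nn.functional as F

def guided_logits(cond, uncond, cond_scale, remove_parallel_component = True):
    # classifier free guidance pushes the conditional logits away from the unconditional ones
    update = cond - uncond

    if remove_parallel_component:
        # drop the component of the update parallel to the conditional logits,
        # following https://arxiv.org/abs/2410.02416
        unit = F.normalize(cond, dim = -1)
        update = update - (update * unit).sum(dim = -1, keepdim = True) * unit

    return cond + (cond_scale - 1.) * update
```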
If you want to tokenize meshes, for use in your multimodal transformer, simply invoke `.tokenize` on your autoencoder (or the same method on the autoencoder trainer instance, for the exponentially smoothed model)
```python
mesh_token_ids = autoencoder.tokenize(
    vertices = vertices,
    faces = faces
)

# (batch, num face vertices, residual quantized layer)
```
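To go back the other way, the token ids should be decodable into face coordinates; the sketch below assumes the autoencoder exposes a `decode_from_codes_to_faces` method, so verify against the current API:

```python
# assumed API - decode the residual quantized codes back into continuous face coordinates

face_coords, face_mask = autoencoder.decode_from_codes_to_faces(mesh_token_ids)

# (batch, num faces, vertices (3), coordinates (3)), (batch, num faces)
```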
At the project root, run

```bash
$ cp .env.sample .env
```
- [x] autoencoder
    - [x] derive `face_edges` directly from faces and vertices
- [ ] transformer
- [x] trainer wrapper with hf accelerate
- [x] text conditioning using own CFG library
- [x] hierarchical transformers (using the RQ transformer)
- [x] fix caching in simple gateloop layer in other repo
- [x] local attention
- [x] fix kv caching for two-staged hierarchical transformer - 7x faster now, and faster than original non-hierarchical transformer
- [x] fix caching for gateloop layers
- [x] allow for customization of model dimensions of fine vs coarse attention network
- [x] figure out if autoencoder is really necessary - it is necessary, ablations are in the paper
- [ ] make transformer efficient
- [ ] speculative decoding option
- [ ] spend a day on documentation
```bibtex
@inproceedings{Siddiqui2023MeshGPTGT,
    title   = {MeshGPT: Generating Triangle Meshes with Decoder-Only Transformers},
    author  = {Yawar Siddiqui and Antonio Alliegro and Alexey Artemov and Tatiana Tommasi and Daniele Sirigatti and Vladislav Rosov and Angela Dai and Matthias Nie{\ss}ner},
    year    = {2023},
    url     = {https://api.semanticscholar.org/CorpusID:265457242}
}
```

```bibtex
@inproceedings{dao2022flashattention,
    title     = {Flash{A}ttention: Fast and Memory-Efficient Exact Attention with {IO}-Awareness},
    author    = {Dao, Tri and Fu, Daniel Y. and Ermon, Stefano and Rudra, Atri and R{\'e}, Christopher},
    booktitle = {Advances in Neural Information Processing Systems},
    year      = {2022}
}
```

```bibtex
@inproceedings{Leviathan2022FastIF,
    title     = {Fast Inference from Transformers via Speculative Decoding},
    author    = {Yaniv Leviathan and Matan Kalman and Y. Matias},
    booktitle = {International Conference on Machine Learning},
    year      = {2022},
    url       = {https://api.semanticscholar.org/CorpusID:254096365}
}
```

```bibtex
@misc{yu2023language,
    title         = {Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation},
    author        = {Lijun Yu and José Lezama and Nitesh B. Gundavarapu and Luca Versari and Kihyuk Sohn and David Minnen and Yong Cheng and Agrim Gupta and Xiuye Gu and Alexander G. Hauptmann and Boqing Gong and Ming-Hsuan Yang and Irfan Essa and David A. Ross and Lu Jiang},
    year          = {2023},
    eprint        = {2310.05737},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV}
}
```

```bibtex
@article{Lee2022AutoregressiveIG,
    title   = {Autoregressive Image Generation using Residual Quantization},
    author  = {Doyup Lee and Chiheon Kim and Saehoon Kim and Minsu Cho and Wook-Shin Han},
    journal = {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year    = {2022},
    pages   = {11513-11522},
    url     = {https://api.semanticscholar.org/CorpusID:247244535}
}
```

```bibtex
@inproceedings{Katsch2023GateLoopFD,
    title  = {GateLoop: Fully Data-Controlled Linear Recurrence for Sequence Modeling},
    author = {Tobias Katsch},
    year   = {2023},
    url    = {https://api.semanticscholar.org/CorpusID:265018962}
}
```