buaacyw / MeshAnything

From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
https://buaacyw.github.io/mesh-anything/

How to do text and images? #4

Open · Tomobobo710 opened this issue 1 week ago

Tomobobo710 commented 1 week ago

Is that not possible right now?

The supported input types are mesh and pc_normal, but this screenshot shows text and image inputs:

[attached screenshot]

Maybe I'm missing something.

buaacyw commented 1 week ago

Image/text to mesh is achieved by combining MeshAnything with 3D generation methods. We first obtain dense meshes from a 3D generation method and use them as input to our method. Note that the shape quality of the dense meshes needs to be high; feed-forward 3D generation methods often produce poor results because their shape quality is insufficient. We suggest using results from SDS-based pipelines (such as DreamCraft3D) as the input to MeshAnything, since they produce better shape quality.
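To make the two-stage workflow concrete, here is a minimal sketch (not from this repo; the file names, sample count, and output format are assumptions) of preparing a dense mesh produced by a 3D generation pipeline as a pc_normal-style input: sample points on the mesh surface with trimesh, attach the corresponding face normals, and save the resulting N x 6 array.

```python
# Minimal sketch: turn a dense mesh from a 3D generation pipeline into a
# point cloud with normals (the "pc_normal" input type mentioned above).
# Assumes trimesh and numpy are installed; "dense_mesh.obj", the sample
# count, and the output file name are placeholders. The exact file format
# and MeshAnything invocation should follow the repository's README.
import numpy as np
import trimesh

# Load the dense mesh produced by, e.g., an SDS-based pipeline.
mesh = trimesh.load("dense_mesh.obj", force="mesh")

# Sample points on the surface and look up the normal of the face
# each sample came from.
points, face_idx = trimesh.sample.sample_surface(mesh, count=8192)
normals = mesh.face_normals[face_idx]

# Pair each point with its normal (N x 6) and save for later use.
pc_normal = np.concatenate([points, normals], axis=1).astype(np.float32)
np.save("dense_mesh_pc_normal.npy", pc_normal)
```

The saved array can then be fed to MeshAnything's point-cloud interface as described in the README; the key point of the reply above is that the dense mesh must have good shape quality, regardless of which pipeline produced it.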

yosun commented 5 days ago

Most 3D generative AI models produce meshes with more than 800 faces, and your system is limited to that count?

Which pipeline are you using?