buaacyw / MeshAnything

From anything to mesh like human artists. Official impl. of "MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers"
https://buaacyw.github.io/mesh-anything/

A question #31

Closed Dumeowmeow closed 2 days ago

Dumeowmeow commented 1 week ago

Thank you for your great work! I have a few small questions.

1. Is the point cloud the only required input during inference? For the mesh-input interface mentioned in the README, do we need to convert the mesh to a point cloud before feeding it in?
2. During training, why does the encoder need to be trained with meshes? It seems the encoder is not used during inference.
3. During training, the decoder takes both the mesh feature and the point feature as input, but at inference it only receives the point feature. Why the difference?

buaacyw commented 1 week ago
  1. Basically, the input to the transformer is always a point cloud. If you input a mesh, we extract a point cloud from it and use that as the input (see the first sketch below).
  2. Yes. The encoder is trained with meshes, and it is not needed for inference.
  3. At inference, the mesh feature is generated by the transformer itself (see the second sketch below).
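A minimal sketch of point 1, not the repo's exact preprocessing: sampling a point cloud from a mesh surface with `trimesh` so the result can be fed to the model the same way a raw point cloud would be. The file path and sample count are illustrative assumptions.

```python
# Sketch: turn a mesh into a point cloud before inference (assumed settings).
import trimesh

mesh = trimesh.load("input_mesh.obj", force="mesh")  # hypothetical input path
points, face_idx = trimesh.sample.sample_surface(mesh, count=8192)

# Optionally attach per-point normals from the sampled faces, since many
# point-cloud encoders expect (x, y, z, nx, ny, nz) per point.
normals = mesh.face_normals[face_idx]
```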
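And a minimal sketch of point 3, with assumed names and shapes rather than the repo's actual API: during training the decoder sees ground-truth mesh tokens via teacher forcing, while at inference it only receives the point-cloud feature and generates the mesh tokens itself, one step at a time.

```python
# Sketch: autoregressive generation conditioned only on point features.
import torch

@torch.no_grad()
def generate(decoder, point_feat, bos_id, eos_id, max_len=2048):
    # point_feat: (1, num_points, dim) conditioning from the point encoder.
    tokens = torch.tensor([[bos_id]], device=point_feat.device)
    for _ in range(max_len):
        logits = decoder(tokens, context=point_feat)      # assumed signature
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if next_tok.item() == eos_id:                      # stop at end token
            break
    return tokens  # mesh token sequence, later decoded back into faces
```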