minyoung-mia-Kim opened this issue 4 years ago
Hi @mia-minyoung,
Thanks for the kind words. First of all, what is the task you are trying to solve? Classification, segmentation, or something else (e.g., regression)?
We handled fairly high-resolution meshes in a follow-up work, point2mesh, which is mostly limited by GPU memory. For meshes with very high resolution (e.g., 40k+ faces), we "broke" the mesh into parts and performed computations on each part separately.
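For illustration, here is a minimal numpy sketch of one way such a split could work: a hypothetical spatial partition of the faces by their centroids along the longest bounding-box axis. This is not the actual point2mesh partitioning code, just a toy version of the idea.

```python
import numpy as np

def split_mesh_faces(verts, faces, n_parts=4):
    """Split a mesh's faces into n_parts chunks along the longest
    bounding-box axis, using face centroids as a rough spatial key."""
    centroids = verts[faces].mean(axis=1)            # (F, 3), one point per face
    extent = verts.max(axis=0) - verts.min(axis=0)   # bounding-box size
    axis = int(np.argmax(extent))                    # longest axis
    order = np.argsort(centroids[:, axis])           # faces sorted along it
    return [faces[idx] for idx in np.array_split(order, n_parts)]

# Toy example: a strip of 4 triangles laid out along x.
verts = np.array([[x, y, 0.0] for x in range(5) for y in (0.0, 1.0)])
faces = np.array([[2 * i, 2 * i + 1, 2 * i + 2] for i in range(4)])
parts = split_mesh_faces(verts, faces, n_parts=2)
print([len(p) for p in parts])  # → [2, 2]
```

Each part can then be processed separately on the GPU; the trade-off is that features do not propagate across part boundaries unless you add overlap or stitch results afterwards.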
But again, this is quite specific to the task you are trying to solve. For classification / segmentation it probably makes sense to simplify these meshes a bit.
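For illustration, a minimal numpy sketch of one crude simplification strategy: vertex clustering on a voxel grid. This is a hypothetical example, not MeshCNN's preprocessing; real pipelines typically use quadric edge-collapse decimation (e.g., in MeshLab or Open3D).

```python
import numpy as np

def simplify_by_clustering(verts, faces, cell_size):
    """Crude simplification: snap vertices onto a voxel grid, merge all
    vertices sharing a cell, and drop faces that collapse as a result."""
    cells = np.floor(verts / cell_size).astype(np.int64)
    _, inverse, counts = np.unique(cells, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()                 # guard against numpy shape quirks
    new_verts = np.zeros((counts.size, 3))
    np.add.at(new_verts, inverse, verts)      # sum the members of each cell...
    new_verts /= counts[:, None]              # ...then average them
    remapped = inverse[faces]                 # faces in terms of merged vertices
    keep = ((remapped[:, 0] != remapped[:, 1]) &
            (remapped[:, 1] != remapped[:, 2]) &
            (remapped[:, 0] != remapped[:, 2]))
    return new_verts, remapped[keep]

# Two triangles sharing an edge; vertices 0 and 1 are nearly coincident.
verts = np.array([[0.0, 0, 0], [0.1, 0, 0], [1.0, 0, 0], [1.0, 1, 0]])
faces = np.array([[0, 1, 2], [1, 2, 3]])
new_v, new_f = simplify_by_clustering(verts, faces, cell_size=0.5)
print(new_v.shape[0], new_f.shape[0])  # → 3 1
```

A larger `cell_size` merges vertices more aggressively; quadric decimation gives much better-shaped results but is more involved.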
-Rana
Thanks for your reply!
At first, I planned to use MeshCNN to segment facial features from a 3D face mesh. I obtained some segments with another method, but I am still considering using this model to build a GAN (specifically, G2LGAN). I thought the transformation invariance of MeshCNN would be attractive for learning mesh features regardless of the vertices' positions.
I also should have asked whether MeshCNN would work for generation tasks as well. I wonder whether this model works only for classification or segmentation.
And thank you for letting me know about your follow-up work! Although I have only skimmed it, it looks amazing too. I'll check it out.
-Minyoung
Hi,
First, thank you for releasing this incredible model to the public. This work really fascinates and motivates me.
I would like to use MeshCNN layers in my network, but my meshes are high resolution, with over 50,000 vertices each. So I wonder whether MeshCNN would work with high-resolution meshes too, and if not, why the model was trained on low-resolution inputs and what the problem was.
Thank you in advance for your reply. Again, the work is awesome, and I hope to see further work!!