lzhengning / SubdivNet

Subdivision-based Mesh Convolutional Networks.
MIT License

Segmentation on my own dataset #9

Closed · goyallon closed this 3 years ago

goyallon commented 3 years ago

Hi @lzhengning, thanks for releasing this code.

I would like to test your pretrained segmentation model on .obj meshes without ground truth. Do you think that is easily achievable? Is there a particular tool you used to label the meshes and produce the .json files present in your repository?

Kind regards.

lzhengning commented 3 years ago

Hi, @goyallon,

To make predictions on other datasets, first remesh them with one of the remeshing tools.

The .json files are not necessary; they only help to project the segmentation results back onto the input meshes (before remeshing). If segmentation on the input meshes is what you want, you can write some code to do the projection and modify the test script and dataloader.

When I prepared the datasets, I used trimesh.proximity.closest_point to find, for every face center of the remeshed mesh, the nearest triangle in the input mesh. Hope this helps.
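For illustration, a minimal sketch of this nearest-triangle label transfer with trimesh (the file names are assumed; this is not the exact preparation script):

```python
# Transfer per-face labels from an input mesh to its remeshed version by
# querying the nearest input triangle for every remeshed face center.
import numpy as np
import trimesh

orig = trimesh.load('shape.obj', process=False)               # input mesh with known face labels
remeshed = trimesh.load('shape_remeshed.obj', process=False)  # remeshed counterpart
orig_face_labels = np.loadtxt('shape_face_labels.txt', dtype=int)  # one label per input face

# Face centers of the remeshed mesh.
centers = remeshed.triangles_center

# closest_point returns (closest points, distances, triangle ids) on `orig`.
_, _, triangle_id = trimesh.proximity.closest_point(orig, centers)
remeshed_face_labels = orig_face_labels[triangle_id]
```

To go the other way (project predictions from the remeshed mesh back onto the original), swap the roles of the two meshes.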

liang3588 commented 3 years ago

Sorry, but the link to "trimesh.proximity.closest_point" you mentioned cannot be opened. Could you please provide the original Human Body and COSEG datasets with labels on faces? Thank you very much!

lzhengning commented 3 years ago

@liang3588 , I have updated the hyperlink to the Trimesh document.

The original datasets can be found on their websites. Both of them have face labels.

liang3588 commented 3 years ago

Thank you very much, but could you please provide the Human Body and COSEG datasets with 500 faces and per-face labels?

lzhengning commented 3 years ago

@liang3588,

Sorry that I am a little bit confused. Do you mean the datasets provided by MeshCNN?

unw9527 commented 3 years ago

Hi @lzhengning, I wonder how to create the .json file if I test on my own data (maybe only a single .obj file), since I want to visualize the result (see the different colors produced by segmentation) and also check the accuracy. Can this be achieved easily?

If this is too troublesome, any ideas on how I can simply skip the .json file and still get the colorized .ply file? Sorry, I am not quite sure what you meant in your previous comments. Thank you.

lzhengning commented 3 years ago

Hi @unw9527, it won't be tricky to run segmentation on your own data.

The .json file stores face labels of both the original shape and the remeshed shape, and it is used to train SubdivNet. There are three keys in a .json file:

If you do not need the accuracy, just make some modifications to the source code. Here are some functions that can help you:
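As a minimal, repo-independent sketch (assumed file names and palette; these are not the functions referenced above), you could skip the .json entirely and color a remeshed mesh by its predicted face labels with trimesh:

```python
# Export a colorized .ply from per-face predictions, without any .json.
import numpy as np
import trimesh

remeshed = trimesh.load('shape_remeshed.obj', process=False)
pred = np.load('predicted_face_labels.npy')   # one predicted label per remeshed face

# Simple per-class RGBA palette; extend it if you have more classes.
palette = np.array([
    [230,  25,  75, 255],
    [ 60, 180,  75, 255],
    [255, 225,  25, 255],
    [  0, 130, 200, 255],
], dtype=np.uint8)

remeshed.visual.face_colors = palette[pred % len(palette)]
remeshed.export('shape_segmented.ply')
```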