Closed: ss892714028 closed this issue 3 years ago
Thanks for your interest.
The problem you encountered looks like a domain adaptation / transfer learning issue. It is common in many areas for models to perform worse on datasets other than the one they were trained on (ModelNet40 here). It might help to pre-train the model on your own dataset with a classification task before using it to generate embeddings.
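As a rough illustration of that workflow, something like the following PyTorch sketch might serve as a starting point. Note that `Backbone` is a stand-in, not the actual MeshNet definition; the checkpoint path, class count, and data are placeholders to adapt to your own code:

```python
import torch
import torch.nn as nn

# Stand-in for the MeshNet backbone: anything that maps mesh features to
# the 256-d embedding. Swap in the actual model definition from this repo.
class Backbone(nn.Module):
    def __init__(self, in_dim=64, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

backbone = Backbone()
# backbone.load_state_dict(torch.load('path/to/pretrained.pkl'), strict=False)

# Fresh classification head for *your* labels, trained jointly with the backbone.
num_classes = 10  # placeholder: number of classes in your own dataset
model = nn.Sequential(backbone, nn.Linear(256, num_classes))

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy batch; replace with your preprocessed mesh features and labels.
feats = torch.randn(8, 64)
labels = torch.randint(0, num_classes, (8,))

model.train()
for _ in range(100):
    optimizer.zero_grad()
    loss = criterion(model(feats), labels)
    loss.backward()
    optimizer.step()

# After fine-tuning, drop the head and take the backbone output as the embedding.
model.eval()
with torch.no_grad():
    embeddings = backbone(feats)  # shape (8, 256)
```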
Thank you for the reply.
So the problem is not due to my preprocessing steps. I will try fine-tuning the pre-trained weights and see how that works!
Thank you again for your clarification :).
I am closing this issue. Thank you for your support. :)
Great research and implementation; thank you so much for making this project open-source.
First of all, this is my first project in the 3D domain and I am not familiar with this field... So sorry if my question sounds dumb to you. :(
I am using MeshNet to generate embeddings (the 256-dimensional MLP layer at the end) for 3D meshes, and I use the L2 norm to compare the distance between the embeddings.
However, with the provided pre-trained weights (trained on ModelNet40), only the embeddings generated from 3D meshes within the ModelNet40 dataset are accurate. I define "accurate" as: if two 3D meshes have similar structures to my eyes, their embeddings are close to each other in Euclidean space, and vice versa.
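For reference, this is roughly how I compare the embeddings (a minimal sketch; the random tensors stand in for the 256-dimensional MeshNet outputs in my actual pipeline):

```python
import torch

# Placeholder embeddings; in my pipeline these come from MeshNet's final
# 256-d MLP layer for two different meshes.
emb_a = torch.randn(256)
emb_b = torch.randn(256)

# L2 (Euclidean) distance between the two embeddings.
dist = torch.norm(emb_a - emb_b, p=2)

# For a gallery of N meshes, all pairwise distances at once:
gallery = torch.randn(100, 256)
pairwise = torch.cdist(gallery, gallery, p=2)  # (100, 100) distance matrix
```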
I suspect that this is caused by the pre-processing steps before the model inference phase, i.e. that there might be a specific requirement for the input 3D mesh that I am not aware of.
Before generating embeddings, I
Any suggestion would be very helpful. Thank you for your time.