martinnormark / neural-mesh-simplification

Unofficial, work-in-progress implementation of the Neural Mesh Simplification paper
MIT License

about training details #33

Closed SMY19999 closed 1 month ago

SMY19999 commented 1 month ago

Hi!

Thanks for your great contribution. It is a really novel method that is differentiable and fast!

I have read the paper and the code.

My question is how to train the model?

The number of vertices is not the same for every mesh in the dataset.

Given a mesh with N vertices, is the batch size N?

Do you train on one mesh per epoch? That would mean the epoch loss reflects only that single mesh.

If that's right, I wonder how the model can generalize to out-of-training data with a different number of vertices and a different topology?

Looking forward to your reply. I really want to learn from this work!

martinnormark commented 1 month ago

I am not associated with the team who wrote the paper, and I am not at all an expert in 3D, meshes, etc. - I found the paper, read it many times, and decided to try to implement it.

I have not completed the implementation; I have written a YOLO training loop but have not tried to run it yet 🙈

From what I understand and my own intuition, the model may generalize to out-of-distribution data thanks to its training strategy of combining the Point Sampler, Edge Predictor, and Face Classifier, with one caveat:

It would probably struggle on meshes that are very complex or highly irregular compared to the training data. For example, the ABC dataset I am using here is relatively simple compared to e.g. game character meshes. So weights trained on industrial CAD meshes would probably not do well on game assets, and the model would need to be re-trained on the domain it is applied in.
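To sketch what I mean by those three stages, here is a toy pipeline. The names, heuristics, and shapes are my own stand-ins (random sampling, kNN edges, edge-closure triangles), not the learned neural modules from the paper or this repo:

```python
import random

def point_sampler(vertices, ratio=0.5):
    """Toy stand-in for the learned Point Sampler: pick a subset of vertex indices."""
    k = max(3, int(len(vertices) * ratio))
    return sorted(random.sample(range(len(vertices)), k))

def edge_predictor(sampled, vertices, k=3):
    """Toy stand-in for the Edge Predictor: connect each sampled
    vertex to its k nearest sampled neighbours."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    edges = set()
    for i in sampled:
        nearest = sorted((j for j in sampled if j != i),
                         key=lambda j: dist2(vertices[i], vertices[j]))[:k]
        for j in nearest:
            edges.add((min(i, j), max(i, j)))  # store undirected edges once
    return edges

def face_classifier(edges):
    """Toy stand-in for the Face Classifier: keep triangles whose
    three edges were all predicted."""
    nodes = sorted({v for e in edges for v in e})
    faces = []
    for a in nodes:
        for b in nodes:
            for c in nodes:
                if a < b < c and {(a, b), (b, c), (a, c)} <= edges:
                    faces.append((a, b, c))
    return faces

# Chained together: vertices in, simplified triangle list out.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
sampled = point_sampler(vertices, ratio=1.0)
faces = face_classifier(edge_predictor(sampled, vertices, k=2))
```

In the real method each stage is learned and differentiable; the point of the sketch is just the data flow from vertices to sampled points to edges to faces.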

On the training part, the meshes, epochs and batch size are not connected in that way. Training happens in a number of loops (epochs), each of which traverses all items in the training set in batches of a fixed batch size. Read more here: https://machinelearningmastery.com/difference-between-a-batch-and-an-epoch/
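To make the epoch/batch/mesh relationship concrete, here is a minimal plain-Python sketch with made-up names (the real loop in this repo uses PyTorch, and `mesh_loss` is just a placeholder for the paper's combined loss terms):

```python
# Toy dataset: each item is one mesh, and meshes may have different
# vertex counts -- that's fine, because batch size counts meshes,
# not vertices.
dataset = [
    {"name": "mesh_a", "num_vertices": 120},
    {"name": "mesh_b", "num_vertices": 4500},
    {"name": "mesh_c", "num_vertices": 900},
    {"name": "mesh_d", "num_vertices": 60},
]

def batches(items, batch_size):
    """Yield consecutive slices of `items` with at most batch_size meshes each."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def mesh_loss(mesh):
    """Hypothetical per-mesh loss (placeholder value, not the paper's losses)."""
    return 1.0 / mesh["num_vertices"]

num_epochs, batch_size = 2, 2
for epoch in range(num_epochs):                  # each epoch sees ALL meshes
    epoch_loss = 0.0
    for batch in batches(dataset, batch_size):   # each batch holds up to batch_size meshes
        batch_loss = sum(mesh_loss(m) for m in batch) / len(batch)
        # in the real loop, backward() and optimizer.step() would go here
        epoch_loss += batch_loss
    print(f"epoch {epoch}: loss {epoch_loss:.4f}")
```

So with 4 meshes and a batch size of 2, every epoch runs 2 batches and the epoch loss aggregates over all 4 meshes, not just one.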

SMY19999 commented 1 month ago

😮 I sincerely admire the diligent work and effort you have put in; it is truly inspiring.

I'm interested in mesh simplification for mobile games.

I have realized that I misunderstood the training process.

For the training loop, the basic data item is a mesh, returned by self.__getitem__(). So the loss is computed per mesh.

A batch has batch_size meshes, and an epoch iteratively processes ALL of the data across several batches.
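In other words (a toy sketch with made-up class and field names, not the repo's actual dataset code):

```python
class MeshDataset:
    """Minimal stand-in for a mesh dataset: one item == one mesh."""
    def __init__(self, meshes):
        self.meshes = meshes

    def __len__(self):
        return len(self.meshes)

    def __getitem__(self, idx):
        # returns one whole mesh, however many vertices it has
        return self.meshes[idx]

def collate(batch):
    """Variable-size meshes can't be stacked into one fixed-shape tensor,
    so one common trick is to keep the batch as a list of meshes.
    (PyTorch Geometric instead concatenates them into one big graph
    with a per-node batch index.)"""
    return list(batch)

data = MeshDataset([
    {"vertices": [(0.0, 0.0, 0.0)] * 10},   # 10-vertex mesh
    {"vertices": [(0.0, 0.0, 0.0)] * 25},   # 25-vertex mesh
])
batch = collate([data[0], data[1]])         # batch_size = 2 meshes
```

The loss is then computed per mesh and averaged over the batch.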

👍 Thanks again for your reply as well as the amazing work you have done. It really helps me a lot to understand this paper.