tensorflow / graphics

TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow
Apache License 2.0

Padding mesh data #544

Open mahaling opened 3 years ago

mahaling commented 3 years ago

I am trying to train the mesh segmentation code here in TensorFlow Graphics with my own dataset. My dataset has meshes with varying vertex and edge counts. I tried to zero-pad the vertex and face arrays for each mesh so that all the meshes in my dataset would have the same number of vertices and faces.

However, I am running into the following issues:

  1. I am not able to view the mesh from the Colab IPython notebook. (My vertex coordinates are between 0 and 1, and I know I can view the meshes without padding.)
  2. Since padding the faces does not allow negative values (PyTorch3D sets the padding value for faces to -1, and I believe here I have to pad with a non-negative index), I padded the faces with 0, which obviously points to the first vertex in the list.

Can someone clearly point out a way to pad custom mesh datasets and feed them into the network? An example of how to do this would be very much appreciated. Thanks in advance.

NOTE: The network does train on this zero-padded dataset, but it does not really learn anything.
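For what it's worth, a common way to make the padding described above harmless is to carry explicit validity masks alongside the padded arrays, so downstream code can tell real vertices and faces from padding. This is a hypothetical sketch (the `pad_mesh` helper and its signature are my own, not part of TensorFlow Graphics):

```python
import numpy as np

def pad_mesh(vertices, faces, max_vertices, max_faces):
    """Pad one mesh to fixed sizes and return masks marking real entries.

    Hypothetical helper: pads vertices with zeros and faces with index 0,
    but keeps boolean masks so downstream code can ignore the padding.
    """
    num_v, num_f = len(vertices), len(faces)
    padded_vertices = np.zeros((max_vertices, 3), dtype=np.float32)
    padded_vertices[:num_v] = vertices
    # Padding faces with index 0 is only safe if a face mask accompanies
    # the data; otherwise padded faces alias the first vertex, as noted above.
    padded_faces = np.zeros((max_faces, 3), dtype=np.int32)
    padded_faces[:num_f] = faces
    vertex_mask = np.arange(max_vertices) < num_v
    face_mask = np.arange(max_faces) < num_f
    return padded_vertices, padded_faces, vertex_mask, face_mask
```

The masks can then be fed into the loss and any visualization code, so that padded entries are dropped before rendering or gradient computation.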

yogeshhk commented 3 years ago

Zero-padding mesh data with 0 looks incorrect to me, as "0" is itself a valid vertex coordinate. It's not like padding language token sequences with "-1". Any numerical value used as mesh padding looks wrong to me. Any comments, corrections, suggestions?
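This is exactly why padding is usually paired with masking: the padding value itself then never matters, because masked entries contribute nothing to the loss or its gradients. A minimal sketch, assuming a per-vertex loss array and a 0/1 vertex mask (both names are illustrative, not from the library):

```python
import numpy as np

def masked_mean_loss(per_vertex_loss, vertex_mask):
    """Average a per-vertex loss over real (unpadded) vertices only.

    Hypothetical sketch: `vertex_mask` is 1 for real vertices and 0 for
    padding, so the padding value (0, -1, anything) never affects the loss.
    """
    mask = vertex_mask.astype(per_vertex_loss.dtype)
    return (per_vertex_loss * mask).sum() / max(mask.sum(), 1.0)
```

With a scheme like this, whether the padding is 0 or -1 becomes a pure convention; what matters is that the mask is threaded through every place the padded tensors are consumed.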

mahaling commented 3 years ago

Thanks for the reply. Could you shed some light on how to do the padding right? I feel this should be trivial, but so far I have not been able to extract a consistent definition of "zero-padding" from the different libraries I have looked at.

mahaling commented 3 years ago

On second look, I think I don't need to pad the data myself; the dataset loader should take care of it. I am able to view the meshes using the mesh_viewer class, which suggests my data preparation is correct. With this non-padded input data, however, the training loss keeps fluctuating, and the models that come out of training seem to have learned nothing. I have tried varying the learning rate, batch size, etc. to see if anything changes, but no luck so far.

Can anyone share pointers on what could have gone wrong, and what I can try to fix the fluctuating training loss?

My worst-case scenario is going back to square one on data preparation, but even then my data would not have the same number of vertices in every mesh unless I opt for some mesh decimation approach.
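One alternative to both padding and decimation is to bucket meshes by size, so every batch drawn from a single bucket already has uniform vertex and face counts and needs no padding at all. A hypothetical sketch (the dict-of-arrays mesh representation here is my own assumption):

```python
from collections import defaultdict

def bucket_by_size(meshes):
    """Group meshes so batches drawn from one bucket need no padding.

    Hypothetical sketch: `meshes` is a list of dicts with "vertices" and
    "faces" arrays; meshes with identical counts end up in the same bucket.
    """
    buckets = defaultdict(list)
    for mesh in meshes:
        key = (len(mesh["vertices"]), len(mesh["faces"]))
        buckets[key].append(mesh)
    return dict(buckets)
```

In practice one would relax the key to size ranges (e.g. vertex count rounded up to the nearest bucket boundary) and pad only within a bucket, which keeps the wasted padding small without requiring every mesh to match exactly.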