HRI-EU / GDL4DesignApps


Point2FFD at the time of inference? #1

Closed asadabbas09 closed 2 years ago

asadabbas09 commented 2 years ago

Thanks for sharing this amazing work.

How do I use a trained Point2FFD network for inference to generate the sharp geometries shown in Fig. 8 of the paper? Do I need to provide a template mesh at inference time? Currently I'm using the following code from the provided notebook to load the trained network and perform inference, but it reconstructs the shrink-wrapped meshes that were generated during the preprocessing step.

## Reconstruct the 3D point clouds
sess, S_in, Z, S_out, feat_layer, pc_size,\
dpout, gamma_n, latt_def, flags = DesignApps.import_net_graph(
                                        'point2ffd_training_config.py', GPUid=-1)
pcs = DesignApps.Z_to_pointcloud('point2ffd_training_config.py', sess, S_out, Z,
                                 Zint, flags, dpout=dpout, gamma_n=gamma_n,
                                 latt_def=latt_def, GPUid=-1)[0]
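
For context, `Zint` holds the latent codes I pass to the decoder. A minimal sketch of how I build them, assuming `z_a` and `z_b` are latent vectors of two encoded training shapes (these names are mine, not the repository's API):

    import numpy as np

    # Linear interpolation between two latent codes z_a and z_b,
    # each of shape (latent_dim,), yielding 5 intermediate designs.
    alpha = np.linspace(0.0, 1.0, 5)[:, None]
    Zint = (1 - alpha) * z_a + alpha * z_b   # shape (5, latent_dim)
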
thrios commented 2 years ago

Dear @asadabbas09,

You are right! To reconstruct polygonal meshes, you need to provide the template mesh. To generate Fig. 8 of the paper, we applied standard FFD to the mesh template taken from ShapeNetCore, using the deformed lattice predicted by the network. We can't add the mesh files to the repository, but we can update the example and indicate the IDs of the shapes utilized in the paper.
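
For reference, here is a minimal sketch of the classical trivariate Bernstein FFD from Sederberg & Parry (1986), assuming the template vertices are already expressed as (s, t, u) parameters in the unit cube of the lattice frame and that the deformed lattice comes as a flat array in (l, m, n) row-major order; `ffd_deform` and its argument layout are illustrative, not the exact functions in this repository:

    import numpy as np
    from scipy.special import comb

    def ffd_deform(stu, lattice, lmn):
        """Trivariate Bernstein FFD (Sederberg & Parry, 1986).

        stu     : (N, 3) vertex parameters in [0, 1]^3 (lattice frame)
        lattice : (l*m*n, 3) deformed control points
        lmn     : (l, m, n) lattice resolution
        """
        l, m, n = lmn
        P = lattice.reshape(l, m, n, 3)

        def bernstein(deg, t):
            # All Bernstein polynomials of degree `deg` evaluated at t: (N, deg+1)
            i = np.arange(deg + 1)
            return comb(deg, i) * t[:, None]**i * (1.0 - t[:, None])**(deg - i)

        Bu = bernstein(l - 1, stu[:, 0])   # (N, l)
        Bv = bernstein(m - 1, stu[:, 1])   # (N, m)
        Bw = bernstein(n - 1, stu[:, 2])   # (N, n)
        # X(s, t, u) = sum_{i,j,k} B_i(s) B_j(t) B_k(u) P_ijk
        return np.einsum('ai,aj,ak,ijkd->ad', Bu, Bv, Bw, P)
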

asadabbas09 commented 2 years ago

Thanks, @thrios. So we need to get the predicted V_batch from Z_to_pointcloud and apply it to our mesh template to reproduce Fig. 8? https://github.com/HRI-EU/GDL4DesignApps/blob/dbc6a7da24e9272491a1ce5ab9b1ddec93e63d4c/gdl4designapps/designapps.py#L525 Do you have an example of how to apply the standard FFD to the template mesh?

thrios commented 2 years ago

Hi @asadabbas09,

Yes, that's the right procedure. The process to parameterize the meshes is the same as the one utilized for the templates, and it is also described in the paper by Sederberg & Parry, "Free-Form Deformation of Solid Geometric Models" (1986). Nevertheless, we will add a specific example for generating shapes with Point2FFD to our set of examples, as well as functions for applying FFD to the current set of pre-processing functions.
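
As a rough illustration of the parameterization step, assuming an axis-aligned rectangular lattice fitted to the template's bounding box (the function name is illustrative):

    import numpy as np

    def lattice_coordinates(vertices):
        # For an axis-aligned rectangular lattice, the FFD parameters
        # (s, t, u) are the vertex positions normalized by the lattice
        # bounding box, so that the template lies in [0, 1]^3.
        vmin = vertices.min(axis=0)
        vmax = vertices.max(axis=0)
        return (vertices - vmin) / (vmax - vmin)
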

asadabbas09 commented 2 years ago

Thanks, @thrios

Also, I used the following workflow to train Point2FFD; could you please check whether it is correct?

  1. Data preprocessing as described in Section 3.2 of the paper, using a rectangular box and the shrink-wrap algorithm.
  2. Training the network with the following config file:
    confignet = {
        'net_id': 'point2ffd_training',        # ID of the network (the same name is utilized for the network directory)
        'dataset': ['car_shrink/xyz'],         # List of the directories that contain the geometric data
        'probsamp': [None],                    # List of the directories that contain the files (*.dat) with the point-sampling probabilities
        'shapelist': [10],                     # List with the number/names of the shapes to be sampled
        'out_data':  None,                     # Output directory to save the network files. A folder <net_id> will be created in <out_data>
        'training_batch_size': 1,              # Batch size utilized for training the model
        'test_batch_size': 1,                  # Batch size utilized for testing the model
        'pc_size': 5058,                       # Point cloud size
        'latent_layer': 128,                   # Size of the latent layer
        'encoder_layers': [1024, 512, 256, 128, 64, 32], # Number of features per convolutional layer, **apart from the last layer**
        'decoder_layers': [32, 64, 128, 256, 512, 1024], # Number of features per fully connected layer, **apart from the last layer**
        'l_rate': 5e-5,                        # Learning rate for the AdamOptimizer algorithm
        'epochs_max': 5000,                    # Maximum number of epochs
        'stop_training': 1e-06,                # Convergence criterion for the mean loss value
        'frac_training': 0.9,                  # Fraction of the data set utilized for training
        'autosave_rate': 10,                   # Interval (in epochs) for saving the network files
        'alpha1': 1e3,                         # Scalar multiplier applied to the shape reconstruction loss (PC-VAE)
        'alpha2': 1e-3,                        # Scalar multiplier applied to the Kullback-Leibler divergence (PC-VAE)
        'dpout': 1.0,                          # Dropout ratio utilized for training the PC-VAE
        'temp_list': ['car_shrink/xyz/model_normalized7.xyz'], # List of template shapes (point clouds)
        'ffd_lmn': [16, 6, 6],                 # FFD lattice resolution (l, m, n)
        'class_layers': [32, 16],
        'sigma_n': 2.39,
        'gamma_n': 1.00,}
  3. Run training and reconstruction:
    archt.point2ffd_training("point2ffd_config.py", GPUid=0)
    archt.reconstruction_losses("point2ffd_config.py", GPUid=0)
  4. Load the learned latent representations to perform inference or interpolation.
    
    data_pc_ae = np.array(pd.read_csv("test_pc-ae/network_verification/network_verification.dat"))
  5. Load the network and weights.
  6. Get V_batch and the original template mesh **model_normalized7.obj**, then perform standard FFD on the template mesh to obtain the final deformation (see the sketch below).
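
A sketch of how I picture step 6, with `load_obj`/`write_obj` as hypothetical helpers and `ffd_deform`/`lattice_coordinates` as in the sketches above; the lattice resolution matches `ffd_lmn` from my config:

    # Load the template mesh used during training (hypothetical loader).
    verts, faces = load_obj('model_normalized7.obj')
    # Map the vertices into the lattice frame (see sketch above).
    stu = lattice_coordinates(verts)
    # Deform with the lattice predicted for one design, V_batch[0].
    verts_def = ffd_deform(stu, V_batch[0], lmn=(16, 6, 6))
    # The deformed mesh reuses the template connectivity (hypothetical writer).
    write_obj('deformed.obj', verts_def, faces)
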
thrios commented 2 years ago

Dear @asadabbas09 ,

We added a generic example for generating polygonal meshes with Point2FFD, as well as the functions for performing free-form deformation.

The outline of the procedure is basically the same as you described above.

asadabbas09 commented 2 years ago

Thanks @thrios, much appreciated.