spurra / vae-hands-3d

Code to evaluate the model from the paper "Cross-modal Deep Variational Hand Pose Estimation"
https://ait.ethz.ch/projects/2018/vae_hands/
GNU General Public License v3.0

How many losses are you using for training? #14

Closed · biswassanket closed this issue 5 years ago

biswassanket commented 5 years ago

Can you please mention how many losses you are using for training? Also, from your paper I could not tell whether, apart from the MSE loss used for cross-reconstruction, you use any self-reconstruction (RGB-to-RGB) loss during training.

spurra commented 5 years ago

For each modality, we have one MSE loss and the KL divergence. We also tried the self-reconstruction loss (see Table 1, variants 3 and 4), but it did not improve performance much. It does seem to help in the semi-supervised setting (see Fig. 3).
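
For reference, here is a minimal sketch of how such an objective could be assembled in PyTorch. This is not the repository's actual code: the function names, the KL weight `beta`, and the optional `rgb_weight` for the self-reconstruction term are illustrative assumptions.

```python
# Illustrative sketch only (not the repo's implementation):
# per-modality MSE reconstruction + KL divergence, with an optional
# RGB-to-RGB self-reconstruction term as in Table 1, variants 3/4.
import torch
import torch.nn.functional as F

def kl_divergence(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

def cross_modal_loss(pred_3d, target_3d, mu, logvar, beta=1.0,
                     pred_rgb=None, target_rgb=None, rgb_weight=0.0):
    # Cross-reconstruction: RGB -> latent -> 3D joint positions (MSE)
    loss = F.mse_loss(pred_3d, target_3d)
    # KL divergence regularizing the shared latent space
    loss = loss + beta * kl_divergence(mu, logvar)
    # Optional self-reconstruction: RGB -> latent -> RGB (MSE)
    if pred_rgb is not None and rgb_weight > 0:
        loss = loss + rgb_weight * F.mse_loss(pred_rgb, target_rgb)
    return loss
```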