lcosmo / LIMP

LIMP: Learning Latent Shape Representations with Metric Preservation Priors. (ECCV2020)

Quantitative evaluation on the given datasets #1

Open mikecheninoulu opened 3 years ago

mikecheninoulu commented 3 years ago

Hi, thanks for your nice work! It's quite impressive!

I am following your work and trying to reproduce it. I trained the model successfully, but I ran into some problems when evaluating it (only on the FAUST dataset so far):

  1. We strictly followed your training settings: 20,000 epochs, batch size 16, and a 256-dimensional latent space. However, the paper states a learning rate of 1e-04, while the code uses 2e-05. Could you confirm which one is correct?

  2. The evaluation metric for interpolation described in Section 4.2 (average geodesic distortion) is unclear to us. Since the intermediate poses are unavailable, there is no ground truth against which to measure the geodesic distortion. We don't know how to implement the evaluation you report in Table 1 after training the model; our current guess is sketched after this list.

  3. Finally, training took us 26 hours, much longer than the 4 hours reported in the paper. My platform is a single NVIDIA Tesla V100 GPU. I'm wondering whether something is wrong with my setup.
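Regarding point 2, here is a minimal sketch of what we are currently computing as "average geodesic distortion", assuming the interpolated shape is compared against the intrinsic metric of the source shape it was decoded from (since the interpolations should be near-isometric) and approximating geodesics with shortest paths on the mesh edge graph. The function names, the source-vertex subsampling, and the normalization are our own assumptions, not your protocol:

```python
# Our guess at the metric, NOT the authors' protocol: compare the intrinsic
# metric of an interpolated FAUST shape against the source shape it was
# decoded from. Geodesics are approximated by shortest paths on the edge graph.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import shortest_path


def mesh_edges(faces):
    """Unique undirected edges of a triangle mesh given as an (F, 3) int array."""
    e = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    return np.unique(np.sort(e, axis=1), axis=0)


def graph_geodesics(verts, faces, sources):
    """Geodesic distances from `sources` to all vertices, via Dijkstra on edges."""
    i, j = mesh_edges(faces).T
    w = np.linalg.norm(verts[i] - verts[j], axis=1)
    n = verts.shape[0]
    adj = coo_matrix((w, (i, j)), shape=(n, n))
    return shortest_path(adj, method="D", directed=False, indices=sources)


def avg_geodesic_distortion(verts_interp, verts_src, faces, n_sources=100, seed=0):
    """Mean relative deviation of the interpolated metric from the source metric."""
    rng = np.random.default_rng(seed)
    sources = rng.choice(verts_src.shape[0], size=n_sources, replace=False)
    d_interp = graph_geodesics(verts_interp, faces, sources)
    d_src = graph_geodesics(verts_src, faces, sources)
    mask = d_src > 0  # drop source-to-itself entries
    return np.mean(np.abs(d_interp[mask] - d_src[mask]) / d_src[mask])
```

If this is not how Table 1 was computed (for example, if the distortion is measured against ground-truth shapes or normalized by the shape diameter), please correct us.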

Could you share your evaluation protocol so we can follow your work more closely?

Best regards!

Ha0Tang commented 3 years ago

I retrained this model. The results can be reproduced, but training also took me about 28 hours instead of 6 hours.