sean-xr closed this issue 11 months ago
Hi @sean-xr , Thanks for your nice words and for reaching out!
You are correct: in the code, the GT correspondence is assumed to be the identity at training time. The reason is that we trained the network on SMPL models, all with the same vertex order, and so we can use this simplified version.
Having code that handles an arbitrary GT correspondence would be more general (e.g., for training on remeshed versions of the FAUST and SCAPE datasets), but it was not needed for our experiments.
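To make the distinction concrete, here is a minimal sketch (not the repository's code) of how the supervised loss would look with an arbitrary ground-truth map versus the identity. The function name `eucl_loss`, the `gt_map` argument, and the tensor shapes are my assumptions for illustration; when `gt_map` is the identity (or omitted), the target reduces to `pc_B` itself, which is exactly the simplification used for SMPL meshes that share one vertex order.

```python
import torch

def eucl_loss(s_max_matrix, pc_B, gt_map=None):
    """Sketch of a supervised correspondence loss (shapes are assumptions).

    s_max_matrix: (batch, nA, nB) soft correspondence from shape A to shape B.
    pc_B:         (batch, nB, 3) vertex positions of shape B.
    gt_map:       (batch, nA) long tensor; gt_map[b, i] is the GT vertex of B
                  matching vertex i of A. None means identity correspondence.
    """
    if gt_map is None:
        # Identity GT correspondence: the target is pc_B itself,
        # as when all training shapes share the SMPL vertex order.
        target = pc_B
    else:
        # Arbitrary GT correspondence: gather B's vertices in GT order.
        batch_idx = torch.arange(pc_B.shape[0]).unsqueeze(1)  # (batch, 1)
        target = pc_B[batch_idx, gt_map]                      # (batch, nA, 3)
    pred = torch.matmul(s_max_matrix, pc_B)                   # (batch, nA, 3)
    return torch.sum(torch.square(pred - target))
```

With `gt_map=None` this matches the structure of the loss in `train_basis.py` up to the transposed point-cloud layout used there.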
Let me know if this clarifies your doubts :)
Best!
Thanks for the prompt answer. It clears up my doubt; I wish you all the best.
Dear Author,
Thanks for the great work! The code is well organized, and the documentation is very well written too.
However, I have one difficulty understanding your code. In `train_basis.py`, the loss is written as:

```python
eucl_loss = torch.sum(torch.square(torch.matmul(s_max_matrix, torch.transpose(pc_B, 1, 2)) - torch.transpose(pc_B, 1, 2)))
```
While in the paper, the same loss function is formulated as:
So why are we assuming the ground-truth correspondence in the code to be identity?
Thanks for the answer.
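As an aside, the loss line quoted above can be run in isolation with dummy tensors to see what it computes. The shapes below are my assumptions, not taken from the repository: `s_max_matrix` as a `(batch, n, n)` soft correspondence and `pc_B` holding points as `(batch, 3, n)`. With `s_max_matrix` set to the identity, the loss is exactly zero, which is the degenerate case the identity-correspondence assumption exploits.

```python
import torch

# Standalone reproduction of the quoted loss line; shapes are assumptions:
# s_max_matrix is a (batch, n, n) correspondence, pc_B is (batch, 3, n).
batch, n = 2, 5
pc_B = torch.randn(batch, 3, n)

# Identity correspondence: predicted and target points coincide.
s_max_matrix = torch.eye(n).expand(batch, n, n)

eucl_loss = torch.sum(torch.square(
    torch.matmul(s_max_matrix, torch.transpose(pc_B, 1, 2))
    - torch.transpose(pc_B, 1, 2)))
print(eucl_loss.item())  # 0.0: the identity maps each point to itself
```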