zhengkw18 / face-vid2vid

Unofficial implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" (CVPR 2021 Oral)

Keypoint prior loss function #7

Closed: hanweikung closed this issue 1 year ago

hanweikung commented 2 years ago

Thank you for your work. May I ask why your keypoint prior loss function is slightly different from the one in the original paper?

In the paper (A.2), the keypoint prior loss function is:

[screenshot of the keypoint prior loss equation from appendix A.2 of the paper]

However, your version in losses.py is:

loss = (
    torch.max(0 * dist_mat, self.Dt - dist_mat).sum((1, 2)).mean()
    + torch.abs(kp_d[:, :, 2].mean(1) - self.zt).mean()
    - kp_d.shape[1] * self.Dt
)

I was wondering why you subtract kp_d.shape[1] * self.Dt at the end.
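
For context, here is a minimal, self-contained version of how I read the whole computation. The dist_mat construction via torch.cdist and the default values for Dt and zt are my own assumptions, not necessarily what losses.py actually does:

import torch

def keypoint_prior_loss_sketch(kp_d, Dt=0.1, zt=0.33):
    # kp_d: driving keypoints of shape (B, K, 3)
    # Assumed construction: pairwise squared distances between keypoints, shape (B, K, K)
    dist_mat = torch.cdist(kp_d, kp_d) ** 2
    loss = (
        # hinge penalty on keypoint pairs that are closer than the threshold Dt
        torch.max(torch.zeros_like(dist_mat), Dt - dist_mat).sum((1, 2)).mean()
        # keep the mean keypoint depth (z coordinate) close to the target zt
        + torch.abs(kp_d[:, :, 2].mean(1) - zt).mean()
        # the extra term I am asking about
        - kp_d.shape[1] * Dt
    )
    return loss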

zhengkw18 commented 2 years ago

The diagonal of the distance matrix dist_mat is zero, so self.Dt - dist_mat puts a value of self.Dt at each of the kp_d.shape[1] diagonal entries. Those constant terms get summed into the loss and need to be thrown away, which is what the subtraction does.
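
As a quick sanity check (a sketch, not the code in losses.py; the torch.cdist-based dist_mat is assumed), subtracting kp_d.shape[1] * self.Dt gives the same value as zeroing out the diagonal before summing:

import torch

B, K, Dt = 2, 10, 0.1
kp_d = torch.randn(B, K, 3)
dist_mat = torch.cdist(kp_d, kp_d) ** 2  # diagonal entries are zero

hinge = torch.max(torch.zeros_like(dist_mat), Dt - dist_mat)

# sum everything, then subtract the K diagonal terms (each contributes exactly Dt)
v1 = hinge.sum((1, 2)).mean() - K * Dt

# equivalently, zero the diagonal before summing
mask = torch.eye(K, dtype=torch.bool).expand(B, K, K)
v2 = hinge.masked_fill(mask, 0.0).sum((1, 2)).mean()

assert torch.allclose(v1, v2, atol=1e-5)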

hanweikung commented 2 years ago

I see now. Thank you for your prompt answer!