oawiles / X2Face

PyTorch code for the ECCV 2018 paper
MIT License

Two Stage Training #5

Closed cscyangj closed 5 years ago

cscyangj commented 6 years ago

How do I run the two-stage training described in the paper?

oawiles commented 6 years ago

We haven't released this, but we may do so in the future.

KeyKy commented 5 years ago

@oawiles I want to ask some questions about the second stage of training:

  1. Did you use only an identity loss? Did you also use an adversarial loss?
  2. How do you select the driving frames during training? Do you select them from a video with the same identity as the source?
oawiles commented 5 years ago
  1. We didn't use an adversarial loss.
  2. In the first stage, the driving frames come from the same video as the source, so a photometric loss can be used. In the second stage we keep the photometric loss and add a further loss: for a given input image we use two driving videos. One is from the same video as the source and gets the photometric + content loss; the other is from a different video and gets only the content loss. A rough sketch of this combination is below.
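
For concreteness, a hypothetical PyTorch sketch of the loss combination just described. The names `model`, `photometric_loss`, and `content_loss` are placeholders for illustration, not functions from the released code, and the exact target of the content loss for the cross-video case is an assumption:

```python
import torch

def second_stage_loss(model, photometric_loss, content_loss,
                      source_frame, driving_same, driving_other):
    # Driving frame from the SAME video as the source: the driving frame
    # itself is ground truth, so apply photometric + content loss against it.
    generated_same = model(source_frame, driving_same)
    loss = photometric_loss(generated_same, driving_same) \
         + content_loss(generated_same, driving_same)

    # Driving frame from a DIFFERENT video: there is no pixel-level ground
    # truth, so only the content loss is applied. Here we compare the
    # generated frame to the source frame, assuming the content loss
    # measures identity preservation.
    generated_other = model(source_frame, driving_other)
    loss = loss + content_loss(generated_other, source_frame)
    return loss
```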

Please look at the paper for a more detailed explanation.

Hope this helps.

mrgloom commented 5 years ago

What is the photometric loss? Is it just an L1 loss on pixels?

nihaomiao commented 4 years ago

Hi, we want to compare our model with X2Face on other datasets. Could you release the two-stage training code so that we can compare results fairly? @oawiles

oawiles commented 3 years ago

Hi,

Unfortunately, we do not plan on releasing that model. However, as stated in the paper, there is a difference between the two training procedures, but it is not that large: the major failure cases are consistent between both. I think that if you have a clear improvement over the basic model, it is fair to say that you improve over X2Face.

Hope that helps, Olivia


oawiles commented 3 years ago

We use "photometric loss" to describe an image-based loss. In this case it can be either an L1 loss on pixels or an L1 + perceptual loss; a sketch of the latter is below.
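
A minimal sketch of what an "L1 + perceptual" loss can look like in PyTorch. The frozen VGG-19 feature extractor from torchvision, the layer cut-off, and the weighting are assumptions for illustration, not the exact setup from the paper:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

class PhotometricLoss(torch.nn.Module):
    def __init__(self, feature_layers=8, perceptual_weight=0.1):
        super().__init__()
        # Frozen VGG-19 features supply the perceptual term.
        self.vgg = vgg19(pretrained=True).features[:feature_layers].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False  # the feature extractor is not trained
        self.perceptual_weight = perceptual_weight

    def forward(self, generated, target):
        pixel_term = F.l1_loss(generated, target)  # plain L1 on raw pixels
        # L1 between intermediate VGG feature maps of the two images.
        perceptual_term = F.l1_loss(self.vgg(generated), self.vgg(target))
        return pixel_term + self.perceptual_weight * perceptual_term
```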
