deepfakes / faceswap-model


Why do we use warped images as part of the loss function? #33

Open nmangaokar opened 5 years ago

nmangaokar commented 5 years ago

I'm just trying to understand the intuition behind setting up the autoencoder training process as a function that maps warped A (or B) to the original A (or B). I've been reading the paper https://arxiv.org/pdf/1706.02932v2.pdf, but its motivation seems different.

Is it so that the network learns to produce a face matching a given feature/expression (the one expressed by the warping)?
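
For concreteness, here is roughly the training setup I'm describing, as a minimal sketch. The layer sizes, model names, and the use of random noise as stand-in data are my own illustrative assumptions, not the actual faceswap-model code: a shared encoder, two decoders (one per identity), and a training step where the input is a warped face but the reconstruction loss is computed against the unwarped original.

```python
# Minimal sketch (illustrative only): shared encoder, per-identity decoders,
# trained to map a warped face back to the unwarped original.
import numpy as np
from tensorflow.keras import layers, Model, optimizers

IMG_SHAPE = (64, 64, 3)

def build_encoder():
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(256, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dense(16 * 16 * 256, activation="relu")(x)
    x = layers.Reshape((16, 16, 256))(x)
    return Model(inp, x, name="shared_encoder")

def build_decoder(name):
    inp = layers.Input(shape=(16, 16, 256))
    x = layers.Conv2DTranspose(128, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_A")
decoder_b = build_decoder("decoder_B")

inp = layers.Input(shape=IMG_SHAPE)
autoencoder_a = Model(inp, decoder_a(encoder(inp)))
autoencoder_b = Model(inp, decoder_b(encoder(inp)))
autoencoder_a.compile(optimizer=optimizers.Adam(1e-4), loss="mae")
autoencoder_b.compile(optimizer=optimizers.Adam(1e-4), loss="mae")

# Training step for identity A: the network sees a randomly warped face
# but is penalised against the *unwarped* original, so minimising the loss
# forces it to undo the distortion rather than copy the input pixel-for-pixel.
warped_a = np.random.rand(8, *IMG_SHAPE).astype("float32")    # stand-in for warp(original_a)
original_a = np.random.rand(8, *IMG_SHAPE).astype("float32")  # stand-in for original_a
autoencoder_a.train_on_batch(warped_a, original_a)
```

The key line is the last one: the loss pairs `warped_a` with `original_a` rather than `original_a` with itself, which is the part whose intuition I'm asking about.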