svip-lab / impersonator

PyTorch implementation of our ICCV 2019 paper: Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
https://svip-lab.github.io/project/impersonator

About details: why you use 1 discriminator(4 layers) and use three label(-1,0,1) ? #47

Closed dypromise closed 4 years ago

dypromise commented 4 years ago

Hi, author! I have read your paper and your code, and I have some questions that confuse me. Since pix2pixHD is well known now, why do you use a single-scale discriminator with three labels (-1, 0, 1) instead of a two-scale discriminator? And does using three labels (-1, 0, 1) have advantages over the two-label style (e.g., pix2pixHD uses 0 and 1 for fake and real)? Thanks for your reply!

dypromise commented 4 years ago

Another difference: your discriminator uses 4 layers, while pix2pixHD uses 3 layers at two scales. Is there a reason behind this design?

StevenLiuWen commented 4 years ago

Hi, @dypromise. We follow LSGAN-V2 (the journal version, https://arxiv.org/pdf/1712.06391.pdf; see Section 3.4, Parameters Selection, in that paper). In that version, the authors found that the labels (-1, 0, 1) work better than the vanilla LSGAN labels (0, 0, 1).
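For readers unfamiliar with the three-label scheme: in least-squares GAN notation the labels are (a, c, b), where the discriminator pushes fake outputs toward a and real outputs toward b, while the generator pushes the discriminator's output on fakes toward c. Vanilla LSGAN uses (0, 0, 1); LSGAN-V2 recommends (-1, 0, 1). Below is a minimal, dependency-free sketch of the loss functions under these assumptions; the function names are illustrative and not taken from the repository's actual code, which is implemented in PyTorch.

```python
def mse(outputs, target):
    """Mean squared distance of discriminator outputs from a scalar label."""
    return sum((o - target) ** 2 for o in outputs) / len(outputs)

def d_loss(d_real, d_fake, a=-1.0, b=1.0):
    # Discriminator: push outputs on real samples toward b (= 1)
    # and outputs on generated samples toward a (= -1).
    # Vanilla LSGAN would use a = 0, b = 1 instead.
    return 0.5 * (mse(d_real, b) + mse(d_fake, a))

def g_loss(d_fake, c=0.0):
    # Generator: push the discriminator's outputs on generated
    # samples toward c (= 0). Vanilla LSGAN would use c = 1.
    return 0.5 * mse(d_fake, c)
```

For example, a discriminator that outputs exactly 1 on real samples and -1 on fakes attains `d_loss == 0`, and the generator's loss is minimized when the discriminator scores its samples at 0, i.e., midway between the real and fake labels.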