zhanglonghao1992 / One-Shot_Free-View_Neural_Talking_Head_Synthesis

Pytorch implementation of paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing"

Few Questions on implementations #21

Open vinayak015 opened 2 years ago

vinayak015 commented 2 years ago

Pardon me for so many questions, but here goes:

  1. Did you try the generator with GAN loss weight 0 (i.e., without adversarial loss), or with LSGAN?
  2. Did you try the generator with spectral norm (mentioned in the one-shot paper)?
  3. Did you try separating the source appearance features from the generator? At test time they are recomputed for every frame.
  4. Did you use pretrained weights for VGG19? Also, FOMM and the one-shot paper use different VGG layers for the perceptual loss.
  5. Did you try a 2D motion field with 2D features?
  6. Were you able to get the video conferencing part working? I am working on that part too.
  7. Did you use a dataset other than VoxCeleb? I suspect the one-shot paper got better results because they trained on a much larger dataset.
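Regarding question 2, a minimal sketch of what "spectral norm in the discriminator" could look like in PyTorch. This is illustrative, not this repo's actual discriminator; `DiscBlock` and its channel sizes are assumptions:

```python
# Hypothetical sketch: wrapping discriminator convolutions with spectral
# normalization, as the one-shot paper describes. Not this repo's code.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class DiscBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # spectral_norm constrains the layer's largest singular value,
        # which tends to stabilize GAN training
        self.conv = spectral_norm(
            nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)
        )
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(self.conv(x))


block = DiscBlock(3, 64)
out = block(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Swapping `spectral_norm` in or out this way leaves the rest of the training loop unchanged, which makes it easy to A/B test.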
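On question 3, the kind of caching I mean could be sketched like this. The class and the `appearance_encoder`/`generator` split are hypothetical stand-ins for whatever modules the repo actually uses; the toy lambdas exist only so the sketch runs:

```python
# Hypothetical sketch: extract the source-image appearance features once,
# then reuse them for every driving frame instead of recomputing per frame.
import torch


class CachedSynthesizer:
    def __init__(self, appearance_encoder, generator):
        self.appearance_encoder = appearance_encoder
        self.generator = generator
        self._source_feats = None

    @torch.no_grad()
    def set_source(self, source_img):
        # the heavy appearance extraction runs once per source image
        self._source_feats = self.appearance_encoder(source_img)

    @torch.no_grad()
    def next_frame(self, driving_frame):
        # per-frame work is only the motion/generation path
        return self.generator(self._source_feats, driving_frame)


# toy stand-ins so the sketch runs; the real modules come from the model
enc = lambda img: img.mean(dim=(2, 3))            # fake feature extractor
gen = lambda feats, drv: feats.sum() + drv.sum()  # fake generator

synth = CachedSynthesizer(enc, gen)
synth.set_source(torch.ones(1, 3, 8, 8))
out = synth.next_frame(torch.zeros(1, 3, 8, 8))
print(float(out))  # 3.0
```

For video conferencing this matters: the source features are fixed for the whole call, so only the driving keypoints need to travel per frame.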