LizhenWangT / StyleAvatar

Code of SIGGRAPH 2023 Conference paper: StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
BSD 2-Clause "Simplified" License
406 stars 44 forks

Any advice on improving the glitchiness? #9

Open · oijoijcoiejoijce opened 1 year ago

oijoijcoiejoijce commented 1 year ago

Thanks again for the great work. Unfortunately, the output is still quite glitchy, as mentioned in the paper. Do you have any recommendations on how to improve that? For example:

  • Less head movement?
  • Maybe rendering just the face instead of the whole head (i.e. keeping the head movement of the source video constant and only rendering the face)

alchemician commented 1 year ago

Same issue here. I tried face reenactment on the video in the Drive link; here is a 5-second output. This clip is taken from 30 to 35 seconds of the original video.

https://drive.google.com/file/d/1gQnw2XaGBjql6ok_IxD9MUihTfTqixT2/view?usp=sharing

LizhenWangT commented 1 year ago

> Same issue here. I tried face reenactment on the video in the Drive link; here is a 5-second output. This clip is taken from 30 to 35 seconds of the original video.
>
> https://drive.google.com/file/d/1gQnw2XaGBjql6ok_IxD9MUihTfTqixT2/view?usp=sharing

This amount of jitter is more than I expected. The results shown in our paper are based on the C++ version, so I haven't tested the results with the Python code. You can try adding a smoothing term to the predicted FaceVerse parameters to improve the effect.
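One way to realize such a smoothing term is a simple exponential moving average over the per-frame coefficients before they are passed to the renderer. This is only a sketch, assuming the predicted parameters are stacked into a (num_frames, num_params) array; the function name and layout below are not part of the repository:

```python
import numpy as np

def smooth_params(param_seq, alpha=0.7):
    """Exponential moving average over per-frame FaceVerse coefficients.

    param_seq: (num_frames, num_params) array of predicted parameters
               (hypothetical layout -- adapt to how the tracker stores them).
    alpha: smoothing strength in [0, 1); higher means smoother but laggier.
    """
    smoothed = np.asarray(param_seq, dtype=np.float32).copy()
    for t in range(1, len(smoothed)):
        smoothed[t] = alpha * smoothed[t - 1] + (1.0 - alpha) * smoothed[t]
    return smoothed
```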

LizhenWangT commented 1 year ago

> Thanks again for the great work. Unfortunately, the output is still quite glitchy, as mentioned in the paper. Do you have any recommendations on how to improve that? For example:
>
>   • Less head movement?
>   • Maybe rendering just the face instead of the whole head (i.e. keeping the head movement of the source video constant and only rendering the face)

Yes, less head movement or only transferring the mouth-related parameters will improve the results.
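A rough sketch of the second suggestion, assuming per-frame coefficient arrays for both the driving (source) and training (target) clips are available; the `MOUTH_EXP_IDX` index list is a placeholder and has to be matched against the actual FaceVerse expression basis:

```python
import numpy as np

# Placeholder indices of the mouth-related expression coefficients --
# look these up in the FaceVerse blendshape/basis definition.
MOUTH_EXP_IDX = np.arange(10, 30)

def transfer_mouth_only(source_params, target_params):
    """Keep the target clip's own pose/eye/expression coefficients and
    overwrite only the mouth-related entries with the source's values."""
    n = min(len(source_params), len(target_params))
    driven = target_params[:n].copy()
    driven[:, MOUTH_EXP_IDX] = source_params[:n, MOUTH_EXP_IDX]
    return driven
```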

alchemician commented 1 year ago

@LizhenWangT Any advice on which FaceVerse parameters to smooth, and how?

LizhenWangT commented 1 year ago

> @LizhenWangT Any advice on which FaceVerse parameters to smooth, and how?

The pose, eye and exp parameters.
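As a sketch of how that could look in practice, again assuming the coefficients are stacked into a (num_frames, num_params) array; the column ranges below are placeholders that must be matched to the actual FaceVerse parameter layout:

```python
import numpy as np

# Hypothetical column ranges for each parameter group -- check the
# real FaceVerse layout before relying on these indices.
PARAM_SLICES = {'pose': slice(0, 6), 'eye': slice(6, 10), 'exp': slice(10, 62)}
ALPHAS = {'pose': 0.8, 'eye': 0.5, 'exp': 0.6}  # stronger smoothing for rigid pose

def smooth_groups(param_seq):
    """EMA-smooth only the pose, eye and exp columns of a
    (num_frames, num_params) coefficient array, leaving the rest untouched."""
    out = np.asarray(param_seq, dtype=np.float32).copy()
    for name, sl in PARAM_SLICES.items():
        a = ALPHAS[name]
        for t in range(1, len(out)):
            out[t, sl] = a * out[t - 1, sl] + (1.0 - a) * out[t, sl]
    return out
```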

LizhenWangT commented 1 year ago

@alchemician I just found that it may be caused by different input latent codes. Is the batch size 1 for testing? The jitter seems to come from feeding a different latent code to each item in the batch. Use batch = 1, or change the code to latent = torch.randn(1, 64).repeat(args.batch, 1) so the same latent code is repeated across the whole batch.
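For reference, the two variants side by side (a minimal sketch; the batch size is hardcoded here in place of args.batch):

```python
import torch

batch = 4  # stands in for args.batch in the test script

# Original behaviour: every item in the batch gets its own latent code,
# so consecutive frames are decoded from different codes and jitter.
latent = torch.randn(batch, 64)

# Suggested fix: sample one code and repeat it across the batch,
# which is equivalent to running with batch size 1.
latent = torch.randn(1, 64).repeat(batch, 1)
```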