LizhenWangT / StyleAvatar

Code of SIGGRAPH 2023 Conference paper: StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
BSD 2-Clause "Simplified" License

Faceverse v1 vs v3 #30

Open oijoijcoiejoijce opened 1 year ago

oijoijcoiejoijce commented 1 year ago

Out of curiosity: why are you using FaceVerse v3 instead of v1? v1 produces a more detailed render, which should let StyleAvatar learn better and track better. What is the rationale for using v3 instead of v1?

Inferencer commented 1 year ago

I was thinking the same for v2. Can we get an answer on this? I don't mind if it's just the face rather than the full head @LizhenWangT

LizhenWangT commented 1 year ago

v2 can also work, but the eyeballs need to be replaced by a small point on the 2D images. v1, however, is too slow in the video preprocessing stage.

Inferencer commented 1 year ago

Oh, that's great. I'm only using it for lip-sync, so I can mask out most of the face; as long as I keep the lips and cheeks I'll be happy. Is there a simple process for switching over to v2?

LizhenWangT commented 1 year ago

> Oh, that's great. I'm only using it for lip-sync, so I can mask out most of the face; as long as I keep the lips and cheeks I'll be happy. Is there a simple process for switching over to v2?

Just replace the npy file at data/faceverse_v3_6_s.npy and delete the code related to the eyeballs and masks in FaceVerse.py (probably also the uv-related code).
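For anyone attempting the swap, a minimal sketch of the file-replacement step might look like the following. This is an assumption-heavy illustration, not the repo's actual procedure: `faceverse_v2.npy` is a hypothetical filename for a model dict exported from the FaceVerse repo, the tiny stand-in dict here only exists so the snippet runs, and the real keys to drop depend on what FaceVerse.py actually reads.

```python
import os
import numpy as np

# For illustration only: fabricate a tiny stand-in model dict. In practice
# this would be the model dict exported from the FaceVerse v2 repo
# (faceverse_v2.npy is a hypothetical filename).
np.save("faceverse_v2.npy", {"meanshape": np.zeros(3), "eyeball_idx": np.zeros(2)})

# Save it under the path StyleAvatar loads from, dropping eyeball/uv-related
# entries to mirror the suggestion of deleting that code in FaceVerse.py.
# The "eye"/"uv" key filter is a guess at the naming convention.
os.makedirs("data", exist_ok=True)
model = np.load("faceverse_v2.npy", allow_pickle=True).item()
model = {k: v for k, v in model.items() if "eye" not in k and "uv" not in k}
np.save("data/faceverse_v3_6_s.npy", model)
```

The corresponding edits to FaceVerse.py (removing the eyeball, mask, and uv handling) still have to be done by hand against whatever keys your exported v2 model actually contains.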