LizhenWangT / StyleAvatar

Code of SIGGRAPH 2023 Conference paper: StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
BSD 2-Clause "Simplified" License

How to make a full video with original background from the generated bbox face images #6

Open alchemician opened 1 year ago

alchemician commented 1 year ago

Hey - I am able to reproduce results from the paper, great work!

I am currently looking into generating a full video with the original background. FaceVerse crops the video to the face region to train StyleAvatar, and now I would like to attach the generated face images back onto the original video. Any ideas or suggestions on how to do this would be greatly appreciated.

oijoijcoiejoijce commented 1 year ago

+1

LizhenWangT commented 1 year ago

Please refer to lines 157–162 of data_reader.py. You can modify the crop_size (or remove the cropping) to change the range of outimg. You can also save the crop parameters (self.crop_center[1] - self.half_length ...) and use them to paste the cropped images back into the original frames.
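A minimal sketch of the paste-back step this describes, assuming the generated face has already been resized to the crop size (2 × half_length) and that `crop_center` / `half_length` hold the values saved from data_reader.py; the function name and signature here are illustrative, not the repo's API:

```python
import numpy as np

def paste_back(frame, face, crop_center, half_length):
    """Paste a generated face crop back into the original frame.

    frame: original video frame, H x W x 3
    face: generated face image, already (2*half_length) x (2*half_length) x 3
    crop_center: (cx, cy) saved when the crop was taken (illustrative names)
    """
    cx, cy = crop_center
    h, w = frame.shape[:2]
    # Clip the paste window to the frame bounds (the crop may touch the border).
    top, bottom = max(cy - half_length, 0), min(cy + half_length, h)
    left, right = max(cx - half_length, 0), min(cx + half_length, w)
    out = frame.copy()
    # Offset into the face crop accounts for any clipping at the frame edge.
    out[top:bottom, left:right] = face[
        top - (cy - half_length):bottom - (cy - half_length),
        left - (cx - half_length):right - (cx - half_length),
    ]
    return out
```

With the parameters saved per-frame during preprocessing, this can be run over every generated image to rebuild the full-resolution video.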

LizhenWangT commented 1 year ago

You can also include some of the original background around the border of the input images, so the transition at the border of the pasted crop is less abrupt.

oijoijcoiejoijce commented 1 year ago

What do you mean by background of the border part?

LizhenWangT commented 1 year ago

Like using images like this as input

[image attachment]

oijoijcoiejoijce commented 1 year ago

Images like this for inference, or for training? (I'm assuming images like these would be in the render directory?)