[Open] 2502128021 opened this issue 4 days ago
Record yourself with little shoulder movement. In a video editing app, change the resolution to 512x512, then zoom in on the face so it takes up 60-90% of the frame. The result should be a 512x512 video of just your face.
Best to keep it at 25 fps.
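The steps above boil down to choosing a square crop around the face so the face fills a target fraction of the 512x512 output. A minimal sketch of that crop calculation (the helper name `square_crop_for_face` and the face bounding box are my own, hypothetical illustrations, not part of LivePortrait; in practice the box would come from a face detector, and the crop would then be resized to 512x512 at 25 fps with a tool like ffmpeg):

```python
def square_crop_for_face(face_box, frame_w, frame_h, face_ratio=0.75):
    """Compute a square crop (left, top, side) around a face so the face
    occupies roughly `face_ratio` of the cropped frame.

    face_box: (x, y, w, h) face bounding box in pixels.
    face_ratio: target fraction of the crop the face should fill (0.6-0.9
    per the advice above).
    """
    x, y, w, h = face_box
    # Crop side length so the larger face dimension fills ~face_ratio of it.
    side = int(max(w, h) / face_ratio)
    # Never exceed the frame itself.
    side = min(side, frame_w, frame_h)
    # Center the crop on the face, clamped to stay inside the frame.
    cx, cy = x + w // 2, y + h // 2
    left = max(0, min(cx - side // 2, frame_w - side))
    top = max(0, min(cy - side // 2, frame_h - side))
    return left, top, side


# Example: a 200x200 face at (400, 300) in a 1920x1080 frame, aiming for
# the face to fill 80% of the crop.
print(square_crop_for_face((400, 300, 200, 200), 1920, 1080, face_ratio=0.8))
# → (375, 275, 250)
```

The returned square region would then be cropped out and scaled to 512x512 to match the recommended driving-video format.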
Thanks for your cool work and for the reply; I'll try what you suggested. But I still have some questions: 1. Why should the driving video have little shoulder movement, since you have offered --no-flag-stitching? If I set it, the reference image should move like the driving video, is that right? 2. Why is it necessary to zoom in on the face? You mentioned in your paper that you built your training set from sources like TalkingHead-1KH, so the face ratio should not be as large as 60-90%. Why is there a difference between training and inference? Hope to get your reply!
You can run this script to make a driving video: https://github.com/KwaiVGI/LivePortrait/blob/main/video2template.py
We are planning to provide functions to create a driving video from a raw video. Stay tuned! @2502128021 @justinjohn0306
As the title describes, can you explain how to make a driving video like the samples in ./assets/examples/driving?