KwaiVGI / LivePortrait

Make one portrait alive!
https://liveportrait.github.io
MIT License

How to make a driving video? #3

Open · opened by 2502128021 4 days ago

2502128021 commented 4 days ago

As the title says, could you explain how to make a driving video like the samples in `./assets/examples/driving`?

Inferencer commented 3 days ago

Record yourself with little shoulder movement. In a video editing app, change the resolution to 512x512, then zoom in on the face so it takes up 60-90% of the frame. The output should be a 512x512 video of just your face.

Best to keep it at 25 fps.
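The cropping step above can be sketched in Python. This is a hypothetical helper (not part of the LivePortrait codebase): given a face bounding box from any face detector, it computes a square crop in which the face height fills roughly the suggested 60-90% of the frame, clamped so the crop stays inside the source video.

```python
def square_crop_for_face(face_box, frame_w, frame_h, target_ratio=0.75):
    """Return (x, y, side) for a square crop around a face.

    face_box: (x0, y0, x1, y1) face bounding box from any detector
              (hypothetical input format).
    The face height fills roughly `target_ratio` of the crop; resize
    the resulting crop to 512x512 afterwards.
    """
    fx = (face_box[0] + face_box[2]) / 2          # face center
    fy = (face_box[1] + face_box[3]) / 2
    face_h = face_box[3] - face_box[1]
    side = int(round(face_h / target_ratio))      # crop size from face height
    side = min(side, frame_w, frame_h)            # crop cannot exceed the frame
    x = int(round(fx - side / 2))
    y = int(round(fy - side / 2))
    x = max(0, min(x, frame_w - side))            # clamp inside the frame
    y = max(0, min(y, frame_h - side))
    return x, y, side
```

The returned values can then be fed into an ffmpeg filter chain to produce the final clip, e.g. `ffmpeg -i in.mp4 -vf "crop=SIDE:SIDE:X:Y,scale=512:512,fps=25" out.mp4`.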

2502128021 commented 3 days ago

Thanks for your cool work and the reply; I'll try what you suggested. I still have two questions:

1. Why should the driving video have little shoulder movement, given that you offer `--no-flag-stitching`? If I set it, the reference image should move like the driving video, right?
2. Why zoom in on the face? Your paper mentions that the training set is built like TalkingHead-1KH, where the face ratio is not as large as 60-90%, so why is there a difference between training and inference?

Hope to get your reply!

justinjohn0306 commented 3 days ago

You can run this script to make a driving video: https://github.com/KwaiVGI/LivePortrait/blob/main/video2template.py

cleardusk commented 3 days ago

We are planning to provide functions to create a driving video from a raw video. Stay tuned! @2502128021 @justinjohn0306