Zenobia7 opened this issue 3 years ago
You can try adding the find-best-frame option when running the demo.
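For example, a minimal sketch (driving.mp4 and source.png are placeholder file names; the vox-256 config and checkpoint names are assumed from the first-order-model README):

```bash
# --find_best_frame starts generation from the driving frame whose pose is most
# aligned with the source image (requires the face-alignment library).
python demo.py --config config/vox-256.yaml \
               --driving_video driving.mp4 \
               --source_image source.png \
               --checkpoint vox-cpk.pth.tar \
               --relative --adapt_scale \
               --find_best_frame
```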
@AliaksandrSiarohin Thank you for your reply, but I still don't know how to choose an appropriate video frame. Should the head be kept as still as possible, the mouth shape exaggerated, and there be no beard, something like that? How do you choose video frames? Looking forward to your reply.
@AliaksandrSiarohin Sorry to bother you, but I also want to ask whether my processing steps are correct and whether any additional face alignment is needed.
@AliaksandrSiarohin
Hello, I would like to ask what the conditions are for selecting the driving video. I run inference with the VoxCeleb (VOX) video dataset, PBP-ES08AVQ is the video ID I chose, and the source image comes from the FFHQ face dataset. The converted result shows a distorted face, which looks as if there is no alignment. What special processing does the driving video need? The steps of my whole inference pipeline are as follows (a command sketch of the last variant is given after the list):
From the provided video-preprocessing repo, run load_videos.py to download the video and clip it
Run demo.py from first-order-model
or
From the provided video-preprocessing repo, download the video and run crop_vox.py to crop the video frames
Run demo.py from first-order-model
or
Crop the video using python crop-video.py --inp some_youtube_video.mp4 in first-order-model
Run demo.py from first-order-model
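For concreteness, the commands for the last variant look roughly like this (file names are placeholders; the config and checkpoint names are taken from the first-order-model README):

```bash
# Step 1: let first-order-model suggest face crops. As far as I understand, this
# prints ffmpeg crop commands that must then be run to produce a 256x256 clip,
# e.g. crop.mp4.
python crop-video.py --inp some_youtube_video.mp4

# Step 2: run the demo on the cropped clip with an FFHQ source image
# (crop.mp4 and ffhq_face.png are placeholder names).
python demo.py --config config/vox-256.yaml \
               --driving_video crop.mp4 \
               --source_image ffhq_face.png \
               --checkpoint vox-cpk.pth.tar \
               --relative --adapt_scale
```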
I have tried the above three methods, but I still cannot reproduce the quality of the model's example results, so I would like to ask how this step should be done. Looking forward to your reply.