AliaksandrSiarohin / first-order-model

This repository contains the source code for the paper First Order Motion Model for Image Animation
https://aliaksandrsiarohin.github.io/first-order-model-website/
MIT License

inference problem!!! #436

Open Zenobia7 opened 3 years ago

Zenobia7 commented 3 years ago

@AliaksandrSiarohin

Hello, I would like to ask: what are the criteria for selecting the driving video? I run inference with the VoxCeleb video dataset (PBP-ES08AVQ is the video ID I chose), and the source image comes from the FFHQ face dataset. The conversion result shows a distorted face, as if the inputs were not aligned. What special preprocessing does the driving video need? The steps of my whole inference pipeline are as follows:

  1. From the provided video-preprocessing repository, run load_videos.py to download the video and clip it

  2. Run demo.py from first-order-model

or

  1. From the provided video-preprocessing repository, download the video and run crop_vox.py to crop the video frames

  2. Run demo.py from first-order-model

or

  1. Crop the video using python crop-video.py --inp some_youtube_video.mp4 in first-order-model

  2. Run demo.py from first-order-model

I have tried all three methods above, but I still cannot reproduce the quality of the model's examples, so I would like to ask what this step should look like. Looking forward to your reply.

AliaksandrSiarohin commented 3 years ago

You can try adding find_best_frame in demo.py.
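For context, demo.py exposes a --find_best_frame option that starts generation from the driving frame whose face pose is closest to the source image. Below is a simplified sketch of the selection idea, assuming landmark arrays are already available; the repository's actual implementation extracts landmarks with the face_alignment library, and the array shapes here are illustrative only:

```python
import numpy as np
from scipy.spatial import ConvexHull

def normalize_kp(kp):
    # Center the landmarks, then rescale by the square root of their
    # convex-hull area so the comparison ignores translation and face size.
    kp = kp - kp.mean(axis=0, keepdims=True)
    area = np.sqrt(ConvexHull(kp[:, :2]).volume)
    return kp / area

def find_best_frame(source_kp, driving_kps):
    # Return the index of the driving frame whose normalized landmarks
    # are closest (in summed squared distance) to the source landmarks.
    source = normalize_kp(source_kp)
    dists = [((normalize_kp(kp) - source) ** 2).sum() for kp in driving_kps]
    return int(np.argmin(dists))
```

With this option, generation runs forward and backward from the selected frame instead of from frame 0, which usually reduces the misalignment-style distortion you describe.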

Zenobia7 commented 3 years ago

@AliaksandrSiarohin Thank you for your reply, but I still don't know how to choose an appropriate driving frame. Should the head be kept as still as possible, the mouth shapes exaggerated, no beard, that sort of thing? How do you choose video frames? Looking forward to your reply.

Zenobia7 commented 3 years ago

@AliaksandrSiarohin Sorry to bother you, but I also want to ask whether my processing steps are correct, or whether additional face alignment is needed.