svip-lab / impersonator

PyTorch implementation of our ICCV 2019 paper: Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
https://svip-lab.github.io/project/impersonator

Could I transfer any video with a common pre-trained model? #14

Closed: magneter closed this 4 years ago

magneter commented 4 years ago

Hi, I have tested the repo and I can transfer the demo images with the pre-trained model, which was trained on Mixamo. But it seems that the pre-trained model can only transfer images to Mixamo actions. How could I modify the network to get a general model that accepts any input and any base action, without re-training a different model?

StevenLiuWen commented 4 years ago

@magneter If you want to imitate the motions from another input video, you can run the following command:

python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/  \
    --src_path      ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg    \
    --tgt_path      ./assets/samples/refs/iPER/024_8_2    \
    --bg_ks 13  --ft_ks 3 \
    --has_detector  --post_tune  \
    --save_res

Here, --src_path is the path of the source input image, and --tgt_path is the directory of frames extracted from the reference video.

To summarize, the steps are as follows:

  1. First, choose a video file with human actions (it is recommended to center-crop the video around the person first; see the example command after this list). The directory ./assets/samples/refs/iPER/024_8_2 is an example of the result.

  2. Then, use ffmpeg or another tool to extract the frames of the video, as in ./assets/samples/refs/iPER/024_8_2 (see the sketch after this list).

  3. Run the command above, replacing --tgt_path with your directory of video frames. The results will be saved in ./outputs/results/imitators.
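For step 1, a minimal cropping sketch using ffmpeg (the file names are placeholders, and the centered square crop is just one reasonable choice; adjust the crop window so the person stays centered):

# Center-crop to a square of side = input height;
# crop=w:h defaults to a window centered in the frame.
ffmpeg -i reference_video.mp4 -vf "crop=ih:ih" cropped.mp4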
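For step 2, a sketch of frame extraction with ffmpeg (the directory and frame naming are illustrative, not a convention required by the repo):

# Extract every frame as a high-quality JPEG into its own directory;
# this directory is what you then pass to --tgt_path.
mkdir -p ./assets/samples/refs/my_video
ffmpeg -i cropped.mp4 -qscale:v 2 ./assets/samples/refs/my_video/frame_%05d.jpg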

Also, we will automate these three steps in the next version.

magneter commented 4 years ago

@StevenLiuWen Thanks. The only thing that confused me is whether I need to train a new model in advance for a different --tgt_path. You wrote a good README and the steps are clear. I tested the demo with some downloaded pre-trained models, so I thought I could only transfer input images to Mixamo actions if I kept using the pre-trained model you provided.

You mean I can change --src_path and --tgt_path to anything without retraining? That is great. Good job; looking forward to the HD version.