Closed magneter closed 4 years ago
@magneter If you want to imitate other motions with a different video input, you can run the following command:
```bash
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg \
    --tgt_path ./assets/samples/refs/iPER/024_8_2 \
    --bg_ks 13 --ft_ks 3 \
    --has_detector --post_tune \
    --save_res
```
Here, `--src_path` is the path of the source input image, and `--tgt_path` is the path of the directory of frames extracted from the video.
To summarize the steps:

1. First, choose a video file with human actions (it is recommended to do a human-centered crop of the video first). The `./assets/samples/refs/iPER/024_8_2` directory is an example.
2. Then, use `ffmpeg` or another tool to extract the video into frames, as in `./assets/samples/refs/iPER/024_8_2`.
3. Finally, run the command above, replacing `--tgt_path` with your directory of video frames. The results will be saved in `./outputs/results/imitators`.
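The steps above can be sketched as a small helper script. This is a minimal sketch, not part of the repo: the frame rate, the frame naming pattern, and the example paths in `__main__` are all assumptions; `run_imitator.py` only needs a directory of image frames as `--tgt_path`.

```python
import os
import subprocess

def build_extract_cmd(video_path, frames_dir, fps=25):
    """Build an ffmpeg command that extracts a video into numbered frames.

    The fps value and the frame naming pattern are assumptions; any
    directory of image frames works as the imitation target.
    """
    pattern = os.path.join(frames_dir, "frame_%08d.jpg")
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", pattern]

def build_imitator_cmd(src_path, tgt_path, output_dir="./outputs/results/"):
    """Build the run_imitator.py command from the steps above."""
    return ["python", "run_imitator.py", "--gpu_ids", "0",
            "--model", "imitator", "--output_dir", output_dir,
            "--src_path", src_path, "--tgt_path", tgt_path,
            "--bg_ks", "13", "--ft_ks", "3",
            "--has_detector", "--post_tune", "--save_res"]

if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    frames_dir = "./assets/samples/refs/iPER/my_video"
    os.makedirs(frames_dir, exist_ok=True)
    # Step 2: extract frames (requires ffmpeg on PATH).
    subprocess.run(build_extract_cmd("my_video.mp4", frames_dir), check=True)
    # Step 3: run imitation with your frames directory as --tgt_path.
    subprocess.run(build_imitator_cmd(
        "./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg", frames_dir),
        check=True)
```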
Also, we will automate these three steps in the next version.
@StevenLiuWen Thanks. The only thing that confuses me is whether I need to train a new model for a different '--tgt_path' in advance. You wrote a good README file and the steps are clear. I tested the demo by downloading some pre-trained models, so I thought that, if I insisted on using the pre-trained model you provided, I could only transfer input images to mixamo's actions.
You mean I can change any '--src_path' and '--tgt_path' without training? That is great. Good job, and I'm waiting for the HD version.
Hi, I have tested the repo, and I can transfer the demo images with the pre-trained model that was trained on mixamo. But it seems that the pre-trained model can only transfer images to mixamo actions. How could I modify the network to get a general model that accepts any input and any base action, without re-training a separate model for each?