svip-lab / impersonator

PyTorch implementation of our ICCV 2019 paper: Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
https://svip-lab.github.io/project/impersonator

No results #18

Closed MarioProjects closed 4 years ago

MarioProjects commented 4 years ago

After running some examples, I cannot find the output in the output_dir. I am running the examples from runDetails.md.

StevenLiuWen commented 4 years ago

@MarioProjects Hi, runDetails.md explains the meanings of the important flags. To keep it simple, take the following example:

python run_imitator.py --gpu_ids 0 --model imitator  \
    --src_path      ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg    \
    --tgt_path      ./assets/samples/refs/iPER/024_8_2    \
    --bg_ks 13  --ft_ks 3 \
    --has_detector  --post_tune  \
    --save_res  --output_dir  /data/test_results

Here, we point output_dir to /data/test_results, so the results are saved in /data/test_results/imitators, and the synthesized images are named pred_xxxx.jpg. For now we only save them as individual images; in the next version, we will assemble the final results into a video (using tools such as ffmpeg).
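
Until that lands, a minimal sketch along these lines can stitch the saved frames into a clip yourself. It assumes the --output_dir from the command above and uses imageio with the imageio-ffmpeg backend; the folder name, frame rate, and output filename below are illustrative choices, not defaults of this repo.

    import glob
    import os

    import imageio  # pip install imageio imageio-ffmpeg

    # Hypothetical path: adjust to whatever --output_dir you passed to run_imitator.py.
    out_dir = "/data/test_results/imitators"

    # Collect the synthesized frames (pred_xxxx.jpg) in order.
    frames = sorted(glob.glob(os.path.join(out_dir, "pred_*.jpg")))
    print(f"found {len(frames)} synthesized frames in {out_dir}")

    # Write them into an mp4; 25 fps is an assumption, not a value taken from the repo.
    with imageio.get_writer(os.path.join(out_dir, "imitation.mp4"), fps=25) as writer:
        for frame_path in frames:
            writer.append_data(imageio.imread(frame_path))

If the frame list comes back empty, that usually means the flags above pointed output_dir somewhere else, so checking the path printed by the script is a good first step.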