Written by Peihuan Wu, Jinghong Lin, Yutao Liao, Wei Qing and Yan Xu, including the normalization and face enhancement parts.
We train and evaluate on Ubuntu 16.04, so if you don't have a Linux environment, set nThreads=0 in EverybodyDanceNow_reproduce_pytorch/src/config/train_opt.py.
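For reference, the relevant entry in train_opt.py might look like the sketch below. Only nThreads comes from the text above; the comment explains the usual reason for the setting and is an assumption:

```python
# Sketch of the relevant option in ./src/config/train_opt.py.
# On non-Linux systems, multi-process data loading (PyTorch DataLoader
# worker processes) often fails, so the worker count is set to zero
# and data is loaded in the main process instead.
nThreads = 0
```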
This work is based on nyoki-mtl/pytorch-EverybodyDanceNow and Lotayou/everybody_dance_now_pytorch.
Download vgg19-dcbb9e9d.pth.crdownload here and put it in ./src/pix2pixHD/models/
Download pose_model.pth here and put it in ./src/PoseEstimation/network/weight/
The source video can be downloaded from here.
Download the pre-trained vgg_16 for face enhancement here and put it in ./face_enhancer/
Put the source video mv.mp4 in ./data/source/ and run make_source.py; the label images and the head coordinates will be saved in ./data/source/test_label_ori/ and ./data/source/pose_souce.npy (used in step 6). If you want to capture video with a camera, you can directly run ./src/utils/save_img.py.
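The saved pose file can be inspected with NumPy. This is a sketch under the assumption that the .npy file holds one (x, y) head coordinate per frame; the exact layout may differ, and the dummy file below only demonstrates the round trip:

```python
import numpy as np

# Assumed layout: one (x, y) head coordinate per frame.
# Write a dummy file in that format, then read it back.
dummy = np.array([[120.0, 64.0], [121.5, 63.0], [123.0, 62.5]])
np.save("pose_demo.npy", dummy)

coords = np.load("pose_demo.npy")   # shape: (n_frames, 2)
first_head_x, first_head_y = coords[0]
```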
Rename your own target video to mv.mp4, put it in ./data/target/, and run make_target.py; pose.npy, which contains the face coordinates (used in step 6), will be saved in ./data/target/.
Run train_pose2vid.py and check the loss and the full training process in ./checkpoints/. If you interrupt training and want to resume from the last checkpoint, set load_pretrain = './checkpoints/target/' in ./src/config/train_opt.py.
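Resuming is controlled by a single option in train_opt.py. A minimal sketch of the likely intent (only the load_pretrain name and path come from the text above; the empty-string convention for training from scratch is an assumption):

```python
# In ./src/config/train_opt.py: point load_pretrain at the checkpoint
# directory to continue from the last saved model. An empty string
# would mean training from scratch (assumed convention).
load_pretrain = './checkpoints/target/'
```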
Run normalization.py to rescale the label images; you can use two sample images, one from ./data/target/train/train_label/ and one from ./data/source/test_label_ori/, to normalize between the two skeleton sizes.
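The idea of normalizing between two skeleton sizes can be sketched as a simple linear rescaling. The function below is illustrative only, not the repo's actual normalization.py:

```python
def rescale_pose(source_pose, source_height, target_height):
    """Linearly rescale source joint coordinates so the source skeleton's
    height matches the target skeleton's height.

    source_pose: list of (x, y) joint coordinates.
    source_height / target_height: skeleton heights (e.g. head-to-ankle
    distances) measured on one sample label image from each video.
    """
    scale = target_height / source_height
    return [(x * scale, y * scale) for (x, y) in source_pose]

# Example: a source skeleton 200 px tall mapped onto a 100 px target.
scaled = rescale_pose([(50.0, 0.0), (50.0, 200.0)], 200.0, 100.0)
```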
Run transfer.py and get the results in ./results.
cd ./face_enhancer and run prepare.py, then check the results in the data directory at the root of the repo (data/face/test_sync and data/face/test_real). Run main.py to train the face enhancer, then run enhance.py to obtain the results.
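The head coordinates saved in steps 1 and 2 are what make a face crop possible at this stage. Below is a hedged sketch of cropping a fixed-size patch around a head coordinate; it is illustrative, not the repo's code, and uses nested lists where a real implementation would use a NumPy array:

```python
def crop_face(frame, head_xy, size=96):
    """Crop a size x size patch centred on the head coordinate,
    clamped so the patch stays inside the frame borders.
    `frame` is a nested list (rows of pixels)."""
    h, w = len(frame), len(frame[0])
    cx, cy = int(head_xy[0]), int(head_xy[1])
    half = size // 2
    left = max(0, min(cx - half, w - size))
    top = max(0, min(cy - half, h - size))
    return [row[left:left + size] for row in frame[top:top + size]]

# 4x4 dummy "frame"; crop a 2x2 patch around head coordinate (2, 2).
frame = [[r * 10 + c for c in range(4)] for r in range(4)]
patch = crop_face(frame, (2, 2), size=2)
```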
cd back to the root directory and run make_gif.py to create a gif out of the resulting images.

Environment:
Ubuntu 16.04
Python 3.6.5
PyTorch 0.4.1
OpenCV 3.4.4
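The final gif-stitching step can be sketched with Pillow (an assumption; make_gif.py may use a different library). Dummy frames stand in for the transfer results, which in practice you would load from ./results:

```python
from PIL import Image

# Three dummy frames in place of the numbered result images.
frames = [Image.new("RGB", (64, 64), color=(i * 80, 0, 0)) for i in range(3)]

# Save the first frame and append the rest as additional gif frames;
# duration is per-frame display time in ms, loop=0 means loop forever.
frames[0].save("output.gif", save_all=True,
               append_images=frames[1:], duration=100, loop=0)
```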