carolineec / EverybodyDanceNow

Motion Retargeting Video Subjects

Can someone explain the file structure/tree for training? #5


justinjohn0306 commented 4 years ago

Can someone explain the file structure/tree for training? I'm really trying to get this working. I've tried other repos, but they're not as good as this one. Any help would be appreciated.

ipwefpo commented 3 years ago

Hi, this is my experience from implementing this project; I hope it helps.

First, you need OpenPose to extract the source and target videos' frames and their keypoint files: https://github.com/CMU-Perceptual-Computing-Lab/openpose
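
A rough sketch of this step is below. It extracts frames with OpenCV and then calls the OpenPose demo binary on the frame directory to write one keypoint JSON per frame. The file names, output folders, and the OpenPose binary path are placeholders; adjust them for your setup and check the OpenPose demo docs for the flags available in your build.

```python
# Sketch: extract frames from a video with OpenCV, then run OpenPose on the
# frame directory to produce one keypoint JSON per frame.
# Paths and the OpenPose binary location are placeholders.
import os
import subprocess
import cv2

def extract_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return idx

n = extract_frames("target.mp4", "target/original_frames")
print(f"wrote {n} frames")

# Run OpenPose on the extracted frames; --display 0 and --render_pose 0
# skip on-screen and rendered-image output so only JSON keypoints are written.
subprocess.run([
    "./build/examples/openpose/openpose.bin",
    "--image_dir", "target/original_frames",
    "--write_json", "target/keypoints",
    "--display", "0",
    "--render_pose", "0",
], check=True)
```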

Then, check graph_train.py in the data_prep folder and read the parameter descriptions in the file; they tell you which folders the training data should be placed in. (graph_train.py prepares the files; its output is the data used for training. A sketch of the folder layout follows below.)
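
For reference, here is a minimal sketch of the dataset skeleton implied by this thread: the per-subject folders (original_frames, keypoints) and the training folders that ship with the sample data (train_label, train_img, train_facetexts128). Whether graph_train.py expects exactly this layout is an assumption; verify it against the parameter docs inside the script.

```python
# Sketch: create the folders named in this thread. The layout is an
# assumption based on the sample_data directory discussed here, not a
# confirmed specification of what graph_train.py expects.
import os

SUBJECT_DIRS = ["original_frames", "keypoints"]                   # raw frames + OpenPose JSON
TRAIN_DIRS = ["train_label", "train_img", "train_facetexts128"]   # training data folders

def make_dataset_skeleton(root):
    for d in SUBJECT_DIRS + TRAIN_DIRS:
        os.makedirs(os.path.join(root, d), exist_ok=True)

make_dataset_skeleton("sample_data")
```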

For other concepts, you can refer to pix2pixHD: https://github.com/NVIDIA/pix2pixHD
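
If it helps, below is a small sanity check borrowing pix2pixHD's convention that each label map in train_label is paired with a same-named image in train_img. The folder names come from this thread; the pairing rule is pix2pixHD's, so treat it as an assumption for this repo.

```python
# Sketch: check that train_label and train_img contain matching filenames,
# since pix2pixHD-style training pairs each label map with an image of the
# same stem name. Adjust the dataset root for your setup.
import os

def check_pairs(root="sample_data"):
    labels = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(root, "train_label"))}
    imgs = {os.path.splitext(f)[0] for f in os.listdir(os.path.join(root, "train_img"))}
    for name in sorted(labels - imgs)[:10]:
        print("label without image:", name)
    for name in sorted(imgs - labels)[:10]:
        print("image without label:", name)
    print(f"{len(labels & imgs)} matched pairs")

check_pairs()
```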

hshreeshail commented 2 years ago

@ipwefpo Could you explain this in more detail? Here is my understanding of how to set up the training data: train_img, train_label, and train_facetexts128 are available from the download link on the official website, https://carolineec.github.io/everybody_dance_now/. But I see two more directories, keypoints and original_frames, in the sample_data directory. So, is the purpose of graph_train.py to generate these two directories from the downloaded data? Also, what is the content of original_frames?