shaoanlu / faceswap-GAN

A denoising autoencoder + adversarial losses and attention mechanisms for face swapping.

Readme to get started for beginners #6

Closed · ruah1984 closed this issue 6 years ago

ruah1984 commented 6 years ago

Can you help provide idiot-proof instructions for new beginners?

shaoanlu commented 6 years ago

The required dependencies are basically the same as deepfakes', except that I use a Jupyter notebook as the main script, which can be made into a .py file. I believe somebody will (or already did) post tutorials on reddit. If you are stuck installing Python packages, my suggestion is to find a cloud platform that provides instances with dlib, Keras, etc. pre-installed, e.g., AWS with a proper AMI; then you will be ready to move on.

To sum up, as long as you can run deepfakes' scripts on your machine, you will have little difficulty running my model.
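For reference, a quick import check like the one below can tell you whether the usual dependencies are in place. The package list is only my rough guess at the typical deepfakes/faceswap-GAN stack, not an official requirements file:

```python
# Rough sanity check: try importing the packages a typical
# deepfakes/faceswap-GAN setup relies on. The list is an assumption,
# not an official requirements file.
import importlib

for pkg in ("numpy", "cv2", "dlib", "keras", "tensorflow", "moviepy"):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, "__version__", "(no __version__)"))
    except ImportError as err:
        print(pkg, "MISSING:", err)
```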

ruah1984 commented 6 years ago

Thanks for the advice. I ran the deepfakes script for a week on a cloud platform (FloydHub) and am still stuck on some error messages. I am still considering whether I can try another method to get the same result.

My question: if I have training data (cropped face images for celebrity A and B), does "target", as you mentioned in the readme, mean the target video (frames) to be converted back? So basically, assuming I already have all the required libraries, do I just need to run FaceSwap_GAN_github.ipynb with the datasets in ./TE/ (training folder) and ./SH/ (target folder)?

shaoanlu commented 6 years ago

FaceSwap_GAN_github.ipynb mainly consists of two parts:

  1. Train a model.
  2. Take a video as input, use the trained model to generate a swapped face for each frame, then output a video. By default it transforms the source face (in ./SH) into the target face (in ./TE).

Note that if you run all cells in the Jupyter notebook, it actually generates the video twice: once in the section "Making video clips w/o face alignment" and again in "Making video clips w/ face alignment". I recommend running only the former and skipping the latter to avoid possible bugs.
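Roughly speaking, the video step in part 2 looks like the sketch below: moviepy feeds each frame through a conversion function and writes the result back out. `swap_face_in_frame` is a hypothetical stand-in for the notebook's actual inference code:

```python
# Sketch of the per-frame conversion loop (part 2 above). The real
# notebook does face detection, runs the trained generator, and blends
# the result back into the frame; swap_face_in_frame is a hypothetical
# placeholder for all of that.
from moviepy.editor import VideoFileClip

def swap_face_in_frame(frame):
    # detect face -> run trained model -> paste swapped face back
    return frame  # placeholder: returns the frame unchanged

clip = VideoFileClip("input_video.mp4")
converted = clip.fl_image(swap_face_in_frame)  # apply to every frame
converted.write_videofile("output_video.mp4", audio=False)
```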

Nelthirion commented 6 years ago

By the way, how many iterations did it take to train your GAN model? The autoencoder doesn't take long (about 15,000 iterations is usually enough), but the GAN seems to take its time learning a proper distribution of faces...

shaoanlu commented 6 years ago

I trained my model for about 15k iterations as well, with a batch size of 32. I also tried training it for another 5k iterations (20k in total) but didn't find a noticeable difference in output quality, so I assumed it had converged and stopped there.
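In loop form, that schedule is just something like the following; `train_one_step` is a hypothetical stand-in for the notebook's actual discriminator/generator updates:

```python
# Sketch of the schedule described above: ~15k iterations at batch
# size 32. train_one_step is a hypothetical placeholder; the real
# update code lives in the notebook.
batch_size = 32
total_iters = 15000

def train_one_step(batch_size):
    # placeholder returning dummy losses; the notebook's real version
    # updates the discriminator and generator here
    return 0.0, 0.0

for it in range(total_iters):
    err_D, err_G = train_one_step(batch_size)
    if it % 100 == 0:
        print("iter {}: loss_D={:.4f} loss_G={:.4f}".format(it, err_D, err_G))
```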

ruah1984 commented 6 years ago

Sorry to interrupt again. Do the TE and SH folders have to contain face images cropped from video frames? And if I want to run this on FloydHub, what would be the best command?

shaoanlu commented 6 years ago

You can put face images of any size into those folders.

I haven't used FloydHub for months and barely remember how to use it, but I believe the Jupyter notebook will work, since training only requires packages like OpenCV, NumPy, and Keras.
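For concreteness, the loading step amounts to something like the sketch below. The notebook's actual load_data may differ (I believe it just globs file paths and leaves reading/resizing to the batch generator), and the 64x64 target resolution is an assumption, so treat this purely as an illustration:

```python
# Illustration of loading arbitrarily sized face images from a folder.
# The real pipeline resizes faces to the model's input resolution;
# 64x64 here is an assumption, not the notebook's confirmed setting.
import glob
import cv2
import numpy as np

def load_face_images(pattern, size=(64, 64)):
    images = []
    for path in glob.glob(pattern):      # e.g. "./SH/*.*"
        img = cv2.imread(path)
        if img is None:                  # skip unreadable/non-image files
            continue
        images.append(cv2.resize(img, size))
    return np.array(images)

train_A = load_face_images("./SH/*.*")
print(len(train_A), "face images loaded")
```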

ruah1984 commented 6 years ago

May I know what the issue is if I see this in the Jupyter notebook?

```
AssertionError                            Traceback (most recent call last)
in ()
      3 train_B = load_data(img_dirB)
      4
----> 5 assert len(train_A), "No image found in " + str(img_dirA)
      6 assert len(train_B), "No image found in " + str(img_dirB)

AssertionError: No image found in ./sh/*.*
```

I have loaded the sh and te folders in FloydHub (using the previous GitHub version, where sh is the source and te is the target), but it says "No image found in ./sh/*.*". Here is the command I used in FloydHub:

```
floyd run --gpu --mode jupyter --env tensorflow-1.4:py2 --data sh:/sh --data te:/te
```
shaoanlu commented 6 years ago

It means no images were found in the sh folder.

If the sh and te folders are from a previous Floyd experiment and you are loading them into a new experiment, these folders are probably located at /input/sh and /input/te, if I remember correctly (I don't know whether the Floyd API has changed).
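If that is the case, pointing the notebook's directory variables at the mounted paths should clear the assertion. Something like the sketch below, where the /input layout is only how I remember Floyd mounting data sources:

```python
# Assuming FloydHub mounts attached datasets under /input/<name>
# (my recollection of the Floyd layout, as above), point the
# notebook's glob patterns there instead of ./sh and ./te.
import glob

def load_data(file_pattern):
    # stand-in matching the notebook's loader, which I believe
    # simply globs for file paths
    return glob.glob(file_pattern)

img_dirA = "/input/sh/*.*"   # source faces
img_dirB = "/input/te/*.*"   # target faces

train_A = load_data(img_dirA)
train_B = load_data(img_dirB)

assert len(train_A), "No image found in " + str(img_dirA)
assert len(train_B), "No image found in " + str(img_dirB)
```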