karanvivekbhargava / obamanet

ObamaNet : Photo-realistic lip-sync from audio (Unofficial port)
MIT License

ObamaNet : Lip Sync from Audio



Requirements

You can install the requirements by running:

sudo pip3 install -r requirements.txt

The project is built for Python 3.5 and above. For the complete list of required libraries, refer to the requirements.txt file.

I used the tools below to extract and manipulate the data:

Data Extraction


I extracted the data from YouTube using youtube-dl, which is perhaps the best YouTube downloader for Linux. Commands for extracting particular streams are given below.

To convert images from BMP to JPG format, run the following in the image directory:

mogrify -format jpg *.bmp
rm -rf *.bmp

Copy the patched images into folder a and the cropped images into folder b:

python3 tools/process.py --input_dir a --b_dir b --operation combine --output_dir c
python3 tools/split.py --dir c
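The `combine` operation pastes each input image (folder a) beside its target (folder b) into a single paired training image, the side-by-side format pix2pix expects. A minimal sketch of the idea, using nested lists in place of real 256x256 pixel arrays:

```python
# Sketch of pix2pix's "combine" step: place image a (input) and image b
# (target) side by side in one frame, row by row.
def combine(a, b):
    # a and b are H x W "images" (lists of rows); heights must match.
    assert len(a) == len(b), "images must have the same height"
    return [row_a + row_b for row_a, row_b in zip(a, b)]

a = [[1, 1], [1, 1]]  # 2x2 input image
b = [[2, 2], [2, 2]]  # 2x2 target image
c = combine(a, b)     # 2x4 combined image
```

`split.py` then partitions the combined images in folder c into train/val subsets for the next step.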

You may use this pretrained model or train pix2pix from scratch using this dataset. Unzip the dataset into the pix2pix main directory.

python3 pix2pix.py --mode train --output_dir output --max_epochs 200 --input_dir c/train/ --which_direction AtoB

To run the trained pix2pix model:

python3 pix2pix.py --mode test --output_dir test_out/ --input_dir c_test/ --checkpoint output/

To convert the output images to a video:

ffmpeg -r 30 -f image2 -s 256x256 -i %d-targets.png -vcodec libx264 -crf 25 ../targets.mp4
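In the command above, `-r 30` sets the frame rate, `-crf 25` the H.264 quality, and `-i %d-targets.png` tells ffmpeg to read unpadded, sequentially numbered frames, typically starting from 0. A tiny sketch of the file names that pattern matches:

```python
# ffmpeg's %d pattern in "-i %d-targets.png" matches unpadded sequential
# frame numbers, so the frames must be named 0-targets.png, 1-targets.png, ...
def frame_names(n):
    return ["%d-targets.png" % i for i in range(n)]

frame_names(3)  # ['0-targets.png', '1-targets.png', '2-targets.png']
```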

Pretrained Model

The pretrained model and a subset of the data are available here - Link

Download and extract the checkpoints and the data folders into the repository. The file structure should look as shown below.

obamanet
├─ data
│   ├─ audios
│   ├─ a2key_data
│   └─ ...
├─ checkpoints
│   ├─ output
│   ├─ model.h5
│   └─ ...
├─ train.py
├─ run.py
├─ run.sh
└─ ...

Running a sample wav file

Run the following command:

bash run.sh <relative_path_to_audio_wav_file>

Example:

bash run.sh data/audios/karan.wav

Feel free to experiment with different voices. However, the result will depend on how close your voice is to that of the subject we trained on.
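At a high level, run.sh drives an audio-to-video pipeline: audio features are mapped to mouth keypoints by the Keras model (checkpoints/model.h5), pix2pix (checkpoints/output) renders the face frames, and ffmpeg stitches them into a video. The sketch below is a hypothetical, simplified outline with dummy stand-ins for the real models; none of the function names are the repo's actual API:

```python
# Hypothetical outline of the inference flow run.sh drives. Dummy
# stand-ins replace the real models so the control flow is runnable.
def extract_audio_features(wav_path):
    # Real pipeline: per-frame audio features computed from the wav.
    return [0.0, 0.1, 0.2]

def predict_keypoints(features):
    # Real pipeline: the Keras model (checkpoints/model.h5) maps audio
    # features to mouth keypoints, one set per video frame.
    return [("mouth_keypoints", f) for f in features]

def render_frames(keypoints):
    # Real pipeline: pix2pix (checkpoints/output) renders a photo-real
    # face frame from each keypoint sketch.
    return ["frame_%d.png" % i for i, _ in enumerate(keypoints)]

def lipsync(wav_path):
    feats = extract_audio_features(wav_path)
    kps = predict_keypoints(feats)
    frames = render_frames(kps)
    # Real pipeline: ffmpeg stitches the frames at 30 fps and muxes
    # the original audio back in.
    return {"frames": frames, "audio": wav_path}
```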

Citation


If you use this code for your research, please cite the paper this code is based on, ObamaNet: Photo-realistic lip-sync from text, and also acknowledge the excellent pix2pix repository by affinelayer.

Cite as arXiv:1801.01442v1 [cs.CV]
