I have tested the scripts in the following environment.
Make sure your dataset follows the structure below.
Dataset-AntiDF
├─Fake
└─Real
Combine the following datasets into Dataset-AntiDF (a small merging sketch is shown after the list).
DFFD: Diverse Fake Face Dataset (contains most of the images from the datasets below)
Hao Dang, Feng Liu, Joel Stehouwer, Xiaoming Liu, & Anil K. Jain (2020). On the Detection of Digital Face Manipulation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2020).
Large-scale CelebFaces Attributes (CelebA) Dataset
Liu, Z., Luo, P., Wang, X., & Tang, X. (2015). Deep Learning Face Attributes in the Wild. In Proceedings of International Conference on Computer Vision (ICCV).
FaceForensics++
Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, & Matthias Nießner (2019). FaceForensics++: Learning to Detect Manipulated Facial Images. In ICCV 2019.
PGGAN
Tero Karras, Timo Aila, Samuli Laine, & Jaakko Lehtinen (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation. CoRR.
StarGAN
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, & Jaegul Choo (2018). StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
StyleGAN
Tero Karras, Samuli Laine, & Timo Aila (2018). A Style-Based Generator Architecture for Generative Adversarial Networks. CoRR.
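If each source dataset sits in its own folder after download, a short script can merge them into the Fake/Real layout above. A minimal sketch, assuming you already know which folders contain real and which contain fake images (the folder names below are placeholders, not the datasets' actual layout):

```python
import shutil
from pathlib import Path

# Placeholder source folders mapped to their target class; adjust to wherever
# you unpacked each dataset.
SOURCES = {
    "downloads/celeba_images": "Real",
    "downloads/stylegan_generated": "Fake",
    "downloads/faceforensics_manipulated": "Fake",
}
TARGET = Path("Dataset-AntiDF")

for src, label in SOURCES.items():
    out_dir = TARGET / label
    out_dir.mkdir(parents=True, exist_ok=True)
    for img in Path(src).glob("*.*"):
        # Prefix files with the source folder name to avoid name collisions.
        shutil.copy(img, out_dir / f"{Path(src).name}_{img.name}")
```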
Create a text file where all the class names are listed line by line. On Linux/macOS this can easily be done with the command below; a cross-platform Python alternative follows.
ls Dataset-AntiDF > classes.txt
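Since the training example below uses a Windows path (D:\Dataset-AntiDF), here is a hypothetical cross-platform equivalent in Python; it simply writes the sub-folder names of the dataset root to classes.txt:

```python
from pathlib import Path

# One class name per line, taken from the sub-folder names of the dataset root.
classes = sorted(p.name for p in Path("Dataset-AntiDF").iterdir() if p.is_dir())
Path("classes.txt").write_text("\n".join(classes) + "\n")
```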
Usage:
python fine_tune.py <dataset_root> <classes> <result_root> [epochs_pre] [epochs_fine] [batch_size_pre] [batch_size_fine] [lr_pre] [lr_fine] [snapshot_period_pre] [snapshot_period_fine]
<dataset_root>: Path to the directory where all the training data is stored. (required)
<classes>: Path to a txt file where all the class names are listed line by line. (required)
<result_root>: Path to the directory where all the result data will be saved. (required)
[epochs_pre]: The number of epochs during the first training stage (default: 5).
[epochs_fine]: The number of epochs during the second training stage (default: 50).
[batch_size_pre]: Batch size during the first training stage (default: 32).
[batch_size_fine]: Batch size during the second training stage (default: 16).
[lr_pre]: Learning rate during the first training stage (default: 1e-3).
[lr_fine]: Learning rate during the second training stage (default: 1e-4).
[snapshot_period_pre]: Snapshot period during the first training stage (default: 1). At every specified number of epochs, a serialized model file will be saved under <result_root>.
[snapshot_period_fine]: Snapshot period during the second training stage (default: 1).
For example:
python fine_tune.py D:\Dataset-AntiDF classes.txt result-balanced-4w_5_50_180_16_1e-3_1e-4_2/ --epochs_pre 5 --epochs_fine 50 --batch_size_pre 180 --batch_size_fine 16 --lr_pre 1e-3 --lr_fine 1e-4
Output of fine_tune.py (serialized model snapshots) is saved under the specified result directory, e.g. result/.
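The *_pre and *_fine options reflect the usual two-stage transfer-learning recipe: first train only the new classification head on a frozen backbone, then unfreeze the backbone and continue at a lower learning rate. A minimal Keras sketch of that idea; the backbone, head layers, and preprocessing here are assumptions for illustration, not necessarily what fine_tune.py uses:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

def build_model(num_classes):
    # ImageNet-pretrained backbone without its original classifier head.
    base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # stage 1: backbone frozen
    x = layers.Dense(1024, activation="relu")(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out), base

model, base = build_model(num_classes=2)  # Real / Fake

# Stage 1: train only the head (epochs_pre, batch_size_pre, lr_pre).
model.compile(optimizer=optimizers.Adam(1e-3), loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5, ...)

# Stage 2: unfreeze the backbone and fine-tune everything (epochs_fine, batch_size_fine, lr_fine).
base.trainable = True
model.compile(optimizer=optimizers.Adam(1e-4), loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=50, ...)
```

Recompiling after changing trainable is what makes the lower second-stage learning rate take effect.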
To monitor training with TensorBoard, enter the command below in a terminal:
tensorboard --logdir=logs/fit/20210313
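A log directory such as logs/fit/20210313 is the kind produced by a Keras TensorBoard callback keyed to the current date. A sketch of that convention (assumed here, not taken from fine_tune.py):

```python
import datetime
import tensorflow as tf

# Log to logs/fit/<YYYYMMDD> so each training day gets its own TensorBoard run.
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d")
tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
# Pass it to training, e.g. model.fit(train_ds, epochs=..., callbacks=[tensorboard_cb])
```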
Usage:
python inference.py <model> <classes> <image>
<model>: Path to a serialized model file. (required)
<classes>: Path to a txt file where all the class names are listed line by line. (required)
<image>: Path to an image file that you would like to classify. (required)
For example:
python inference.py result-balanced-4w_5_50_180_16_1e-3_1e-4_1/model_fine_final.h5 classes.txt images/faceapp/F_FAP1_00334-2.png
......
2021-03-08 22:50:04.006047: I tensorflow/stream_executor/cuda/cuda_blas.cc:1838] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
Top 1 ====================
Class name: Real
Probability: 100.00%
Top 2 ====================
Class name: Fake
Probability: 0.00%
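For reference, a minimal sketch of the kind of inference loop that produces output like the above, assuming a serialized Keras .h5 model and the classes.txt format described earlier (the input size and preprocessing are assumptions and must match what training used; the real inference.py may differ):

```python
import sys
import numpy as np
import tensorflow as tf

model_path, classes_path, image_path = sys.argv[1:4]

model = tf.keras.models.load_model(model_path)
classes = [c for c in open(classes_path).read().splitlines() if c.strip()]

# Assumed 299x299 input scaled to [0, 1]; match the preprocessing used in training.
img = tf.keras.preprocessing.image.load_img(image_path, target_size=(299, 299))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis] / 255.0

probs = model.predict(x)[0]
for rank, idx in enumerate(np.argsort(probs)[::-1], start=1):
    print(f"Top {rank} ====================")
    print(f"Class name: {classes[idx]}")
    print(f"Probability: {probs[idx] * 100:.2f}%")
```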
The web demo is based on Flask 1.1.2. Run flask-inference.py, then open the URL shown in the terminal in a browser.
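A minimal sketch of such a Flask app; the model path, route, form field, and preprocessing below are assumptions for illustration, and flask-inference.py in this repository may be organized differently:

```python
import numpy as np
import tensorflow as tf
from flask import Flask, request
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("result/model_fine_final.h5")  # assumed model path
classes = [c for c in open("classes.txt").read().splitlines() if c.strip()]

@app.route("/", methods=["GET", "POST"])
def classify():
    if request.method == "POST":
        # Read the uploaded image from the "image" form field and preprocess it.
        img = Image.open(request.files["image"].stream).convert("RGB").resize((299, 299))
        x = np.asarray(img, dtype="float32")[np.newaxis] / 255.0
        probs = model.predict(x)[0]
        best = int(np.argmax(probs))
        return f"{classes[best]}: {probs[best] * 100:.2f}%"
    return ('<form method="post" enctype="multipart/form-data">'
            '<input type="file" name="image"> <input type="submit" value="Classify"></form>')

if __name__ == "__main__":
    app.run()
```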