Paper: Aghasanli, Agil, Dmitry Kangin, and Plamen Angelov. "Interpretable-through-prototypes deepfake detection for diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops. 2023. https://openaccess.thecvf.com/content/ICCV2023W/DFAD/papers/Aghasanli_Interpretable-Through-Prototypes_Deepfake_Detection_for_Diffusion_Models_ICCVW_2023_paper.pdf
Ensure you have downloaded the project datasets before running any script.
Feature_extraction_pretrained.py
Change the data_dir variable in line 72 of this file to the path on your local machine where the datasets are stored, then run:
python Feature_extraction_pretrained.py
This script will generate four CSV files containing the train/test features and labels.
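For orientation, here is a minimal sketch of the kind of pretrained ViT feature extraction this step performs. It is not the repository's exact code: the timm model name, the dataset folder layout (train/test subfolders), and the CSV file names are assumptions.

    # Sketch: extract pretrained ViT features for a folder dataset and save them to CSV.
    import numpy as np
    import timm
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    data_dir = "/path/to/datasets"   # change to your local dataset path (line 72 in the script)
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Pretrained ViT backbone; num_classes=0 makes the forward pass return pooled features.
    model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
    model.eval().to(device)

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
    ])

    def extract(split):
        ds = datasets.ImageFolder(f"{data_dir}/{split}", transform=tfm)
        loader = DataLoader(ds, batch_size=32, shuffle=False)
        feats, labels = [], []
        with torch.no_grad():
            for x, y in loader:
                feats.append(model(x.to(device)).cpu().numpy())
                labels.append(y.numpy())
        return np.concatenate(feats), np.concatenate(labels)

    # Write the four CSV files (names are placeholders, not the script's actual output names).
    for split in ("train", "test"):
        X, y = extract(split)
        np.savetxt(f"features_{split}.csv", X, delimiter=",")
        np.savetxt(f"labels_{split}.csv", y, delimiter=",")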
Feature_extraction_finetuned.py
Change the data_dir variable in line 66 of this file to match the path on your local machine, then run:
python Feature_extraction_finetuned.py
As with the pretrained model, this generates four distinct CSV files for the train/test features and labels.
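The finetuned variant differs mainly in loading the finetuned checkpoint before extraction. A minimal sketch, assuming a hypothetical checkpoint file name produced by the finetuning notebook:

    # Sketch: load a finetuned ViT checkpoint before running the same feature extraction.
    import timm
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)
    state = torch.load("vit_finetuned.pth", map_location=device)  # hypothetical checkpoint path
    model.load_state_dict(state, strict=False)  # strict=False: the classification head is dropped
    model.eval().to(device)
    # ...then extract features exactly as in the pretrained sketch above.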
After obtaining the necessary CSV files:
xDNN_run.py
Edit xDNN_run.py to import the correct CSV files, then run the script to test the xDNN classifier:
python xDNN_run.py
This script also generates a data file containing prototypes for later use (e.g., explainability).
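For reference, a minimal sketch of how the generated CSVs could be passed to the xDNN classifier. The CSV file names and the xDNN(Input, Mode) interface, including the dictionary keys, are assumptions based on the publicly available xDNN implementation and may not match what xDNN_run.py actually does:

    # Sketch: feed the extracted features/labels to xDNN for learning and validation.
    # File names and the xDNN interface are assumptions; adapt them to the actual script.
    import numpy as np
    import pandas as pd
    from xDNN_class import xDNN  # assumed module name from the public xDNN code

    X_train = pd.read_csv("features_train.csv", header=None).values
    y_train = pd.read_csv("labels_train.csv", header=None).values.reshape(-1, 1)
    X_test = pd.read_csv("features_test.csv", header=None).values
    y_test = pd.read_csv("labels_test.csv", header=None).values.reshape(-1, 1)

    # Learning phase: identifies per-class prototypes from the training features.
    out_learn = xDNN({"Images": np.arange(len(y_train)).astype(str),
                      "Features": X_train,
                      "Labels": y_train}, "Learning")

    # Validation phase: classifies the test features against the learned prototypes.
    out_val = xDNN({"xDNNParms": out_learn["xDNNParms"],
                    "Images": np.arange(len(y_test)).astype(str),
                    "Features": X_test,
                    "Labels": y_test}, "Validation")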
To see the results using SVM, KNN, and Naive Bayes:
test_classifiers.py
Edit test_classifiers.py so the CSV file names it loads match the generated files, then run:
python test_classifiers.py
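A minimal sketch of such a comparison using scikit-learn (the CSV file names are assumptions; adapt them to the generated files):

    # Sketch: compare SVM, KNN, and Naive Bayes on the extracted ViT features.
    import pandas as pd
    from sklearn.metrics import accuracy_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    X_train = pd.read_csv("features_train.csv", header=None).values
    y_train = pd.read_csv("labels_train.csv", header=None).values.ravel()
    X_test = pd.read_csv("features_test.csv", header=None).values
    y_test = pd.read_csv("labels_test.csv", header=None).values.ravel()

    for name, clf in [("SVM", SVC()),
                      ("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("Naive Bayes", GaussianNB())]:
        clf.fit(X_train, y_train)
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"{name}: {acc:.4f}")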
Run the Jupyter notebook finetune.ipynb to fine-tune the ViT model on the Deepfake FFHQ dataset (or another new dataset).
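For orientation, a minimal fine-tuning sketch with timm and an ImageFolder dataset. The dataset path, model name, hyperparameters, and output checkpoint name are assumptions; the notebook may differ.

    # Sketch: fine-tune a ViT for binary real/fake classification on an ImageFolder dataset.
    import timm
    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    data_dir = "/path/to/deepfake_ffhq"   # placeholder dataset path
    device = "cuda" if torch.cuda.is_available() else "cpu"

    model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2).to(device)

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
    ])
    train_ds = datasets.ImageFolder(f"{data_dir}/train", transform=tfm)
    train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):   # small number of epochs, purely illustrative
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

    torch.save(model.state_dict(), "vit_finetuned.pth")  # weights reused by the finetuned extractor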
Follow these instructions to ensure the correct setup and execution of scripts within the project.