This project aims to advance our research on disrupting deepfakes toward a presentation at CVPR 2025. The goal is to develop effective techniques that interfere with the creation or dissemination of deepfake content, rendering it unusable or easily identifiable as manipulated.
Deepfake-Shield is a system designed to counter deepfake technology by introducing methods that actively disrupt the deepfake generation process or degrade the quality of deepfakes to make them less convincing. By leveraging adversarial techniques, noise injection, and data manipulation, this project provides a proactive approach to mitigating the risks posed by deepfake technology.
Deepfakes pose a significant threat by generating realistic but fake media that can deceive viewers and systems. Instead of merely detecting deepfakes after they are created, Deepfake-Shield takes a more proactive approach by disrupting the deepfake generation process. This disruption makes it challenging for attackers to create convincing deepfakes, thereby reducing the potential harm.
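The disruption approach described above can be sketched with an FGSM-style adversarial perturbation. The example below is a minimal sketch, not the project's actual method: it substitutes a toy linear surrogate (`W @ x`) for a real deepfake generator, and the weights `W`, the quadratic loss, and the budget `eps` are all illustrative assumptions. Each pixel is nudged by `eps` in the sign of the loss gradient, which provably increases the surrogate's loss and so degrades whatever the model would produce from the protected image.

```python
# Minimal sketch of adversarial noise injection for deepfake disruption.
# The "generator" is a hypothetical linear surrogate; a real pipeline
# would backpropagate through an actual face-manipulation model.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64                               # flattened toy "image"
W = rng.normal(size=(32, n_pixels))         # surrogate weights (assumption)
x = rng.uniform(0.2, 0.8, size=n_pixels)    # clean input, inside [0, 1]

def surrogate_loss(img):
    """Quadratic stand-in for 'how badly the generator misbehaves on img'."""
    out = W @ img
    return float(out @ out)

# Gradient of the quadratic loss: d/dx ||W x||^2 = 2 W^T (W x).
grad = 2.0 * W.T @ (W @ x)

# FGSM-style step: move each pixel by eps in the sign of the gradient
# (guaranteed to increase a convex quadratic loss), then clip to [0, 1].
eps = 0.03
x_adv = np.clip(x + eps * np.sign(grad), 0.0, 1.0)

loss_clean = surrogate_loss(x)
loss_adv = surrogate_loss(x_adv)
print(loss_adv > loss_clean)  # the perturbation degrades the surrogate
```

Because the perturbation is bounded by `eps` per pixel, the protected image remains visually indistinguishable from the original while becoming a poor input for the (surrogate) generator.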
@inproceedings{ko2025deepfakeshield,
  title={Deepfake-Shield},
  author={Yeong-Min Ko},
  booktitle={to be added.},
  year={2025}
}
git clone https://github.com/yourusername/Deepfake-Shield.git
pip install -r requirements.txt