dot (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. dot is created for performing penetration testing against e.g. identity verification and video conferencing systems, for use by security analysts, Red Team members, and biometrics researchers.
If you want to learn more about how dot is used for penetration tests with deepfakes in the industry, read these articles by The Verge and Biometric Update.
dot is developed for research and demonstration purposes. As an end user, you have the responsibility to obey all applicable laws when using this program. Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.
In a nutshell, dot works like this:

```mermaid
flowchart LR;
    A(your webcam feed) --> B(suite of realtime deepfakes);
    B(suite of realtime deepfakes) --> C(virtual camera injection);
```
None of the deepfakes supported by dot require additional training. They can be used in real time, on the fly, on a photo that becomes the target of face impersonation. Supported methods:
* face swap (via SimSwap) at resolutions `224` and `512`
* face superresolution (via GPEN) at resolutions `256` and `512`
* face reenactment (via FOMM)
* lower-resolution face swap (via OpenCV, the FaceSwap CV2 method)
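Conceptually, every method plugs into the same loop: read a webcam frame, apply the face model, and push the result to a virtual camera. The following is a minimal sketch of that loop in Python, using OpenCV and the third-party pyvirtualcam package; the `apply_deepfake` function is a hypothetical placeholder, not dot's actual API:

```python
# Minimal sketch of the webcam -> deepfake -> virtual camera loop.
# Assumes opencv-python and pyvirtualcam are installed, and a virtual
# camera backend (e.g. v4l2loopback or OBS Virtual Camera) is available.
import cv2
import pyvirtualcam

def apply_deepfake(frame_rgb):
    # Hypothetical placeholder for a real model such as SimSwap or FOMM.
    return frame_rgb

cap = cv2.VideoCapture(0)  # target: webcam id 0
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        frame_bgr = cv2.resize(frame_bgr, (640, 480))
        frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        cam.send(apply_deepfake(frame_rgb))  # inject the faked frame
        cam.sleep_until_next_frame()
cap.release()
```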
Download and run the dot executable for your OS:

* Windows: download `dot.zip` from here, unzip it and then run `dot.exe`
* Mac (tested on Apple M2 Sonoma 14.0):
  * Download `dot-m2.zip` from here and unzip it
  * Run `xattr -cr dot-executable.app` to remove any extended attributes
  * Right-click the app and select `Show Package Contents`
  * Run `dot-executable` from the `Contents/MacOS` folder

Usage example:
* Choose a `source` image.
* Choose a `target` camera id. In most cases, `0` is the correct camera id.
* Choose a `config_file`. Select a default configuration from the dropdown list or use a custom file.
* Check `use_gpu` to use the GPU.
* Press the `RUN` button to start the deepfake.
* For more information about each field, click on the menu `Help/Usage`.
Watch the following demo video for a better understanding of the interface:
Linux
sudo apt install ffmpeg cmake
MacOS
brew install ffmpeg cmake
Windows
The instructions assume that you have Miniconda installed on your machine. If you don't, refer to this link for installation instructions.
conda env create -f envs/environment-gpu.yaml
conda activate dot
Install the `torch` and `torchvision` dependencies based on the CUDA version installed on your machine:

* Install `cudatoolkit` from conda: `conda install cudatoolkit=<cuda_version_no>` (replace `<cuda_version_no>` with the version on your machine).
* Install the `torch` and `torchvision` dependencies: `pip install torch==2.0.1+<cuda_tag> torchvision==0.15.2+<cuda_tag> torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118`, where `<cuda_tag>` is the CUDA tag defined by PyTorch. For example, `pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118` for CUDA 11.8.
Note: `torch 1.9.0+cu111` can also be used.
To check that `torch` and `torchvision` are installed correctly, run the following command: `python -c "import torch; print(torch.cuda.is_available())"`. If the output is `True`, the dependencies are installed with CUDA support.
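For a slightly more detailed check, this sketch also prints the installed versions and the detected GPU (standard PyTorch calls only):

```python
import torch
import torchvision

# Report the installed versions and whether CUDA is usable.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```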
conda env create -f envs/environment-apple-m2.yaml
conda activate dot
To check that `torch` and `torchvision` are installed correctly, run the following command: `python -c "import torch; print(torch.backends.mps.is_available())"`. If the output is `True`, the dependencies are installed with Metal programming framework support.
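As a quick smoke test beyond the availability flag, you can allocate a tensor directly on the Apple GPU (standard PyTorch MPS usage):

```python
import torch

# Allocating on the "mps" device fails loudly if Metal support is broken.
print("MPS available:", torch.backends.mps.is_available())
x = torch.ones(3, device="mps")
print((x * 2).cpu())
```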
conda env create -f envs/environment-cpu.yaml
conda activate dot
pip install -e .
Run `dot --help` to get a full list of available options.
Simswap
dot -c ./configs/simswap.yaml --target 0 --source "./data" --use_gpu
SimSwapHQ
dot -c ./configs/simswaphq.yaml --target 0 --source "./data" --use_gpu
FOMM
dot -c ./configs/fomm.yaml --target 0 --source "./data" --use_gpu
FaceSwap CV2
dot -c ./configs/faceswap_cv2.yaml --target 0 --source "./data" --use_gpu
Note: To enable face superresolution, use the flag `--gpen_type gpen_256` or `--gpen_type gpen_512`. To use dot on CPU (not recommended), do not pass the `--use_gpu` flag.
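To build intuition for the simplest method above (FaceSwap CV2), here is a minimal sketch of a classical OpenCV face swap: detect a face in each image, paste the source face over the target face, and blend with seamless cloning. This is illustrative only, not dot's implementation; `crude_face_swap` is a hypothetical helper.

```python
import cv2
import numpy as np

# Haar cascade face detector shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crude_face_swap(source_bgr, target_bgr):
    """Paste the first detected source face onto the first detected target face."""
    src = detector.detectMultiScale(cv2.cvtColor(source_bgr, cv2.COLOR_BGR2GRAY), 1.3, 5)
    dst = detector.detectMultiScale(cv2.cvtColor(target_bgr, cv2.COLOR_BGR2GRAY), 1.3, 5)
    if len(src) == 0 or len(dst) == 0:
        return target_bgr  # nothing to swap
    (sx, sy, sw, sh), (dx, dy, dw, dh) = src[0], dst[0]
    face = cv2.resize(source_bgr[sy:sy + sh, sx:sx + sw], (dw, dh))
    mask = np.full(face.shape[:2], 255, dtype=np.uint8)
    center = (dx + dw // 2, dy + dh // 2)
    # Seamless cloning blends the pasted face into the target's lighting.
    return cv2.seamlessClone(face, target_bgr, mask, center, cv2.NORMAL_CLONE)

swapped = crude_face_swap(cv2.imread("source.jpg"), cv2.imread("target.jpg"))
cv2.imwrite("swapped.jpg", swapped)
```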
Disclaimer: We use the SimSwap technique for the following demonstration.
Running dot via any of the above methods generates a real-time deepfake on the input video feed, using source images from the `data/` folder.

When running dot, a list of available control options appears in the terminal window, as shown above. You can toggle through and select different source images by pressing the associated control key.
Watch the following demo video for a better understanding of the control options:
Build the container
docker-compose up --build -d
Access the container
docker-compose exec dot "/bin/bash"
Build the container
docker build -t dot -f Dockerfile .
Run the container
xhost +
docker run -ti --gpus all \
-e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
-e NVIDIA_VISIBLE_DEVICES=all \
-e PYTHONUNBUFFERED=1 \
-e DISPLAY \
-v .:/dot \
-v /tmp/.X11-unix:/tmp/.X11-unix:rw \
--runtime nvidia \
--entrypoint /bin/bash \
-p 8080:8080 \
--device=/dev/video0:/dev/video0 \
dot
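Once inside the container, you can confirm that the `--device=/dev/video0` mapping works before launching dot. A minimal check with OpenCV (assumes opencv-python from the dot environment):

```python
import cv2

# Grab a single frame from the mapped webcam device.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
print("webcam accessible:", ok)
if ok:
    print("frame shape:", frame.shape)
cap.release()
```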
Follow the instructions here under Windows to set up the webcam with docker.
Build the container
docker build -t dot -f Dockerfile .
Run the container
docker run -ti --gpus all \
-e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
-e NVIDIA_VISIBLE_DEVICES=all \
-e PYTHONUNBUFFERED=1 \
-e DISPLAY=192.168.99.1:0 \
-v .:/dot \
--runtime nvidia \
--entrypoint /bin/bash \
-p 8080:8080 \
--device=/dev/video0:/dev/video0 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
dot
Follow the instructions here to set up the webcam with docker.
Build the container
docker build -t dot -f Dockerfile .
Run the container
docker run -ti --gpus all \
-e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
-e NVIDIA_VISIBLE_DEVICES=all \
-e PYTHONUNBUFFERED=1 \
-e DISPLAY=$IP:0 \
-v .:/dot \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--runtime nvidia \
--entrypoint /bin/bash \
-p 8080:8080 \
--device=/dev/video0:/dev/video0 \
dot
Instructions vary depending on your operating system.
Install OBS Studio.
Run OBS Studio.
In the Sources section, press the Add button ("+" sign),
select Windows Capture and press OK. In the window that appears, choose "[python.exe]: fomm" in the Window drop-down menu and press OK. Then select Edit -> Transform -> Fit to screen.
In OBS Studio, go to Tools -> VirtualCam. Check AutoStart, set Buffered Frames to 0, and press Start.
Now the OBS-Camera virtual camera should be available in Zoom (or other videoconferencing software).
sudo apt update
sudo apt install v4l-utils v4l2loopback-dkms v4l2loopback-utils
sudo modprobe v4l2loopback devices=1 card_label="OBS Cam" exclusive_caps=1
v4l2-ctl --list-devices
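After loading the module, `v4l2-ctl --list-devices` should show the new "OBS Cam" device. You can also sanity-check it from Python with the third-party pyvirtualcam package (assumption: the loopback device was created at `/dev/video2`):

```python
import numpy as np
import pyvirtualcam

# Stream a solid green test pattern to the v4l2loopback device for ~3 seconds.
with pyvirtualcam.Camera(width=640, height=480, fps=30, device="/dev/video2") as cam:
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    frame[:, :, 1] = 255  # green in RGB
    for _ in range(90):
        cam.send(frame)
        cam.sleep_until_next_frame()
```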
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt install obs-studio
Open OBS Studio and check if `tools --> v4l2sink` exists. If it doesn't, follow these instructions:
mkdir -p ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
ln -s /usr/lib/obs-plugins/v4l2sink.so ~/.config/obs-studio/plugins/v4l2sink/bin/64bit/
Use the virtual camera with OBS Studio:

* Open OBS Studio
* Go to `tools --> v4l2sink`
* Select `/dev/video2` and `YUV420`
* Press `start`
* Select `OBS Cam` as your camera in the target application
* Run dot with the `--use_cam` flag to enable the camera feed

If you are performing a test against a mobile app, virtual cameras are much harder to inject. An alternative is to use mobile emulators and still resort to virtual camera injection.
* Run dot. Check running dot for more information.
* Run OBS Studio and set up the virtual camera. Check virtual-camera-injection for more information.
* Download and install Genymotion.
* Open Genymotion and set up the Android emulator.
* Set up dot with the Android emulator:
  * Click on `camera` and select `OBS-Camera` as front and back cameras. A preview of the dot window should appear.
  * In case there is no preview, restart OBS and the emulator and try again.
  * If that doesn't work, use different virtual camera software like e2eSoft VCam or ManyCam.
* The dot deepfake output should now be the emulator's phone camera.

Tested on an AMD Ryzen 5 2600 Six-Core Processor with one NVIDIA GeForce RTX 2070:
Simswap: FPS 13.0
Simswap + gpen 256: FPS 7.0
SimswapHQ: FPS 11.0
FOMM: FPS 31.0
Tested on a MacBook Air M2 2022 with 16GB RAM:
Simswap: FPS 3.2
Simswap + gpen 256: FPS 1.8
SimswapHQ: FPS 2.7
FOMM: FPS 2.0
This is not a commercial Sensity product, and it is distributed freely with no warranties.
The software is distributed under the BSD 3-Clause license. dot utilizes several open-source libraries. If you use dot, make sure you agree with their licenses too. In particular, this codebase is built on top of the following research projects:
If you have ideas for improving dot, feel free to open relevant Issues and PRs. Please read CONTRIBUTING.md before contributing to the repository.
Running dot on pre-recorded image and video files

Q: dot is very slow and I can't run it in real time.
A: Make sure that you are running it on a GPU by using the `--use_gpu` flag. CPU is not recommended. If you still find it too slow, it may be because you are running it on an old GPU model with less than 8GB of RAM.

Q: Does dot only work with a webcam feed, or also with a pre-recorded video?
A: You can use dot on a pre-recorded video file with these scripts, or try it directly on Colab.
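For reference, offline processing follows the same pattern as the live loop, reading from a file instead of a camera. A minimal sketch with OpenCV (the `apply_deepfake` function is again a hypothetical placeholder for the model; dot's own scripts linked above are the supported route):

```python
import cv2

def apply_deepfake(frame_bgr):
    # Hypothetical placeholder for the face-swap model.
    return frame_bgr

cap = cv2.VideoCapture("input.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 30
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

# Process the file frame by frame and write the faked output.
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(apply_deepfake(frame))

cap.release()
out.release()
```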