Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes
summary: The task of deepfake detection is far from being solved by speech or vision researchers. Several publicly available databases of fake synthetic video and speech were built to aid the development of detection methods. However, existing databases typically focus on visual or voice modalities and provide no proof that their deepfakes can in fact impersonate any real person. In this paper, we present the first realistic audio-visual database of deepfakes, SWAN-DF, where lips and speech are well synchronized and the videos have high visual and audio quality. We took the publicly available SWAN dataset of real videos of different identities and created audio-visual deepfakes using several models from DeepFaceLab and blending techniques for face swapping, and the HiFiVC, DiffVC, YourTTS, and FreeVC models for voice conversion. From the publicly available speech dataset LibriTTS, we also created a separate database of audio-only deepfakes, LibriTTS-DF, using several of the latest text-to-speech methods: YourTTS, Adaspeech, and TorToiSe. We demonstrate the vulnerability of a state-of-the-art speaker recognition system, such as the ECAPA-TDNN-based model from SpeechBrain, to the synthetic voices. Similarly, we tested a face recognition system based on the MobileFaceNet architecture against several variants of our visual deepfakes. The vulnerability assessment shows that by tuning existing pretrained deepfake models to specific identities, one can successfully spoof the face and speaker recognition systems more than 90% of the time and achieve a very realistic-looking and realistic-sounding fake video of a given person.
id: http://arxiv.org/abs/2311.17655v1
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json, and is going to be used for VCLab's activity.
Thank you so much.