Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Combining Automatic Speaker Verification and Prosody Analysis for
Synthetic Speech Detection
summary: The rapid spread of media content synthesis technology and the potentially
damaging impact of audio and video deepfakes on people's lives have raised the
need to implement systems able to detect these forgeries automatically. In this
work we present a novel approach for synthetic speech detection, exploiting the
combination of two high-level semantic properties of the human voice. On one
side, we focus on speaker identity cues and represent them as speaker
embeddings extracted using a state-of-the-art method for the automatic speaker
verification task. On the other side, voice prosody, intended as variations in
rhythm, pitch or accent in speech, is extracted through a specialized encoder.
We show that the combination of these two embeddings fed to a supervised binary
classifier allows the detection of deepfake speech generated with both
Text-to-Speech and Voice Conversion techniques. Our results show improvements
over the considered baselines, good generalization properties over multiple
datasets and robustness to audio compression.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2210.17222v1
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.