Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Detection and Evaluation of human and machine generated speech in
spoofing attacks on automatic speaker verification systems
summary: Automatic speaker verification (ASV) systems utilize the biometric
information in human speech to verify the speaker's identity. The techniques
used for performing speaker verification are often vulnerable to malicious
attacks that attempt to induce the ASV system to return wrong results, allowing
an impostor to bypass the system and gain access. Attackers use a multitude of
spoofing techniques for this, such as voice conversion, audio replay, speech
synthesis, etc. In recent years, easily available tools to generate deepfaked
audio have increased the potential threat to ASV systems. In this paper, we
compare the potential of human impersonation (voice disguise) based attacks
with attacks based on machine-generated speech, on black-box and white-box ASV
systems. We also study countermeasures by using features that capture the
unique aspects of human speech production, under the hypothesis that machines
cannot emulate many of the fine-level intricacies of the human speech
production mechanism. We show that fundamental frequency sequence-related
entropy, spectral envelope, and aperiodic parameters are promising candidates
for robust detection of deepfaked speech generated by unknown methods.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2011.03689v1
judge
Write [vclab::confirmed] or [vclab::excluded] in the comment.