Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Toward Improving Synthetic Audio Spoofing Detection Robustness via Meta-Learning and Disentangled Training With Adversarial Examples
summary: Advances in automatic speaker verification (ASV) promote research into the
formulation of spoofing detection systems for real-world applications. The
performance of ASV systems can be degraded severely by multiple types of
spoofing attacks, namely, synthetic speech (SS), voice conversion (VC), replay,
twins and impersonation, especially in the case of unseen synthetic spoofing
attacks. A reliable and robust spoofing detection system can act as a security
gate to filter out spoofing attacks instead of having them reach the ASV
system. In this study, a weighted additive angular margin loss is proposed to
address the data imbalance issue, and different margins are assigned to
improve generalization to unseen spoofing attacks. Meanwhile, we incorporate a
meta-learning loss function that optimizes the differences between the
embeddings of the support and query sets in order to learn a
spoofing-category-independent embedding space for utterances. Furthermore, we
craft adversarial examples by adding imperceptible perturbations to spoofed
speech as a data augmentation strategy, and we use an auxiliary batch
normalization (BN) layer so that the corresponding normalization statistics
are computed exclusively on the adversarial examples. Additionally, a simple
attention module is integrated into the residual block to refine the feature
extraction process. Evaluation results on the Logical Access (LA) track of the
ASVspoof 2019 corpus confirm the effectiveness of our proposed approaches,
with a pooled EER of 0.87% and a min t-DCF of 0.0277.
These advancements offer effective options to reduce the impact of spoofing
attacks on voice recognition/authentication systems.
id: http://arxiv.org/abs/2408.13341v1
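For reference, the weighted additive angular margin loss mentioned in the summary follows the general AAM-softmax (ArcFace-style) pattern. The abstract does not give the exact formulation, so the sketch below is only an illustration under assumed details: a binary bona fide/spoof setup, placeholder per-class margins and weights, and a PyTorch-style implementation; it is not the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedAAMSoftmax(nn.Module):
    """Sketch of a weighted additive angular margin loss.

    The per-class margins and class weights below are placeholders, not
    values from the paper; class 0 = bona fide, class 1 = spoof.
    """
    def __init__(self, embed_dim, num_classes=2, scale=30.0,
                 margins=(0.2, 0.4), class_weights=(0.9, 0.1)):
        super().__init__()
        self.W = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.W)
        self.scale = scale
        self.register_buffer("margins", torch.tensor(margins))
        self.register_buffer("class_weights", torch.tensor(class_weights))

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.W))
        theta = torch.acos(cos.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add a class-dependent angular margin to the target logit only.
        m = self.margins[labels]
        target = torch.cos(theta.gather(1, labels.unsqueeze(1)).squeeze(1) + m)
        logits = cos.scatter(1, labels.unsqueeze(1), target.unsqueeze(1))
        # Class weights counteract the bona fide / spoof data imbalance.
        return F.cross_entropy(self.scale * logits, labels,
                               weight=self.class_weights)
```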
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.