Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Self-Attention and Hybrid Features for Replay and Deep-Fake Audio Detection
summary: Due to the successful application of deep learning, audio spoofing detection
has made significant progress. Spoofed audio with speech synthesis or voice
conversion can be well detected by many countermeasures. However, an automatic
speaker verification system is still vulnerable to spoofing attacks such as
replay or Deep-Fake audio. Deep-Fake audio means that the spoofed utterances
are generated using text-to-speech (TTS) and voice conversion (VC) algorithms.
Here, we propose a novel framework based on hybrid features with the
self-attention mechanism. Hybrid features are expected to provide greater
discriminative capacity. Firstly, instead of only one type of conventional
feature, deep-learning features and Mel-spectrogram features are extracted
along two parallel paths: convolutional neural networks, and a short-time
Fourier transform (STFT) followed by Mel-frequency filtering. Secondly, the
features are fused by a max-pooling layer. Thirdly, a self-attention mechanism
focuses on the essential elements. Finally, a ResNet and a linear layer
produce the final scores. Experimental results reveal that the hybrid
features, compared with conventional features, cover more details of an
utterance. We achieve a best Equal Error Rate (EER) of 9.67\% in the
physical access (PA) scenario and 8.94\% in the Deep-Fake task on the
ASVspoof 2021 dataset. Compared with the best baseline system, the proposed
approach improves by 74.60\% and 60.05\%, respectively.
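The pipeline the abstract describes (two parallel feature paths, fusion, self-attention, then classification) can be sketched in simplified form as follows. This is a minimal illustrative mock-up, not the paper's implementation: all window sizes, band counts, kernel shapes, and the plain-concatenation fusion are assumptions, and the Mel filterbank and CNN are replaced by toy stand-ins.

```python
import numpy as np

def stft_mel(wave, n_fft=256, hop=128, n_mels=8):
    """Spectral path: STFT magnitudes pooled into coarse Mel-like bands.
    (A real system would apply a proper Mel filterbank here.)"""
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        spec = np.abs(np.fft.rfft(wave[start:start + n_fft] * np.hanning(n_fft)))
        bands = np.array_split(spec, n_mels)      # crude band pooling
        frames.append([b.mean() for b in bands])
    return np.array(frames)                       # (frames, n_mels)

def conv_features(wave, kernel, n_fft=256, hop=128):
    """Deep-learning path stand-in: one 1-D convolution + ReLU per frame,
    pooled to two channels. (The paper uses a trained CNN instead.)"""
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        act = np.maximum(np.convolve(wave[start:start + n_fft], kernel, mode="valid"), 0.0)
        frames.append([act.mean(), act.max()])
    return np.array(frames)                       # (frames, 2)

def self_attention(x):
    """Single-head dot-product self-attention over the frame axis."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

rng = np.random.default_rng(0)
wave = rng.standard_normal(2048)                  # toy utterance
kernel = rng.standard_normal(16)                  # toy conv kernel

mel = stft_mel(wave)
cnn = conv_features(wave, kernel)
hybrid = np.concatenate([mel, cnn], axis=1)       # fuse the two paths per frame
attended = self_attention(hybrid)                 # re-weight essential frames
print(hybrid.shape, attended.shape)               # (15, 10) (15, 10)
```

In the actual system the attended features would then be passed to a ResNet and a linear layer to produce the spoofed/bona-fide score; that classifier is omitted here.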
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2401.05614v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.