Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Adversarial speech for voice privacy protection from Personalized Speech generation
summary: The rapid progress in personalized speech generation technology, including
personalized text-to-speech (TTS) and voice conversion (VC), poses a challenge
in distinguishing between generated and real speech for human listeners,
resulting in an urgent demand for protecting speakers' voices from malicious
misuse. In this regard, we propose a speaker protection method based on
adversarial attacks. The proposed method perturbs speech signals by minimally
altering the original speech while rendering downstream speech generation
models unable to accurately generate the voice of the target speaker. For
validation, we employ the open-source pre-trained YourTTS model for speech
generation and protect the target speaker's speech in the white-box scenario.
Automatic speaker verification (ASV) evaluations were carried out on the
generated speech to assess the voice protection capability. Our
experimental results show that we successfully perturbed the speaker encoder of
the YourTTS model using the gradient-based I-FGSM adversarial perturbation
method. Furthermore, the adversarial perturbation is effective in preventing
the YourTTS model from generating the speech of the target speaker. Audio
samples can be found at
https://voiceprivacy.github.io/Adeversarial-Speech-with-YourTTS.
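For context on the method the abstract names, below is a minimal PyTorch sketch of a gradient-based I-FGSM perturbation against a speaker encoder. The speaker_encoder callable (waveform tensor in, embedding tensor out), the cosine-similarity loss, and all parameter values are illustrative assumptions, not the paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def ifgsm_protect(waveform, speaker_encoder, eps=0.002, alpha=0.0005, steps=10):
        """Return a protected waveform whose speaker embedding is pushed away
        from the clean one, staying within an L-inf ball of radius eps.
        NOTE: speaker_encoder, the loss, and eps/alpha/steps are assumptions
        for illustration, not YourTTS's or the paper's code."""
        clean_emb = speaker_encoder(waveform).detach()  # embedding of unmodified speech
        adv = waveform.clone().detach()
        for _ in range(steps):
            adv.requires_grad_(True)
            emb = speaker_encoder(adv)
            # Loss to ascend: negative cosine similarity, so each ascent step
            # reduces similarity to the original speaker embedding.
            loss = -F.cosine_similarity(emb, clean_emb, dim=-1).mean()
            grad, = torch.autograd.grad(loss, adv)
            with torch.no_grad():
                adv = adv + alpha * grad.sign()                     # I-FGSM step
                adv = waveform + (adv - waveform).clamp(-eps, eps)  # project into eps-ball
        return adv.detach()

Each iteration takes a small signed-gradient step and re-projects onto the eps-ball around the original waveform, which keeps the perturbation small while degrading the speaker embedding that a downstream generation model conditions on.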
id: http://arxiv.org/abs/2401.11857v1
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.