Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Defending Your Voice: Adversarial Attack on Voice Conversion
summary: Substantial improvements have been achieved in recent years in voice
conversion, which converts the speaker characteristics of an utterance into
those of another speaker without changing the linguistic content of the
utterance. Nonetheless, the improved conversion technologies also led to
concerns about privacy and authentication. It thus becomes highly desired to be
able to prevent one's voice from being improperly utilized with such voice
conversion technologies. This is why we report in this paper the first known
attempt to try to perform adversarial attack on voice conversion. We introduce
human imperceptible noise into the utterances of a speaker whose voice is to be
defended. Given these adversarial examples, voice conversion models cannot
convert other utterances so as to sound like being produced by the defended
speaker. Preliminary experiments were conducted on two currently
state-of-the-art zero-shot voice conversion models. Objective and subjective
evaluation results in both white-box and black-box scenarios are reported. It
was shown that the speaker characteristics of the converted utterances were
made obviously different from those of the defended speaker, while the
adversarial examples of the defended speaker are not distinguishable from the
authentic utterances.
id: http://arxiv.org/abs/2005.08781v1
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activities.
Thank you so much.
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.