Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Pseudo-Siamese Network based Timbre-reserved Black-box Adversarial Attack in Speaker Identification
summary: In this study, we propose a timbre-reserved adversarial attack approach for
speaker identification (SID) to not only exploit the weakness of the SID model
but also preserve the timbre of the target speaker in a black-box attack
setting. Particularly, we generate timbre-reserved fake audio by adding an
adversarial constraint during the training of the voice conversion model. Then,
we leverage a pseudo-Siamese network architecture to learn from the black-box
SID model constraining both intrinsic similarity and structural similarity
simultaneously. The intrinsic similarity loss is to learn an intrinsic
invariance, while the structural similarity loss is to ensure that the
substitute SID model shares a similar decision boundary to the fixed black-box
SID model. The substitute model can be used as a proxy to generate
timbre-reserved fake audio for attacking. Experimental results on the Audio
Deepfake Detection (ADD) challenge dataset indicate that the attack success
rate of our proposed approach yields up to 60.58% and 55.38% in the white-box
and black-box scenarios, respectively, and can deceive both human beings and
machines.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activities.
Thank you so much.
id: http://arxiv.org/abs/2305.19020v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.