Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: VQMIVC: Vector Quantization and Mutual Information-Based Unsupervised
Speech Representation Disentanglement for One-shot Voice Conversion
summary: One-shot voice conversion (VC), which performs conversion across arbitrary
speakers with only a single target-speaker utterance for reference, can be
effectively achieved by speech representation disentanglement. Existing work
generally ignores the correlation between different speech representations
during training, which causes leakage of content information into the speaker
representation and thus degrades VC performance. To alleviate this issue, we
employ vector quantization (VQ) for content encoding and introduce mutual
information (MI) as the correlation metric during training, to achieve proper
disentanglement of content, speaker and pitch representations, by reducing
their inter-dependencies in an unsupervised manner. Experimental results
reflect the superiority of the proposed method in learning effective
disentangled speech representations for retaining source linguistic content and
intonation variations, while capturing target speaker characteristics. In doing
so, the proposed approach achieves higher speech naturalness and speaker
similarity than current state-of-the-art one-shot VC systems. Our code,
pre-trained models and demo are available at
https://github.com/Wendison/VQMIVC.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2106.10132v1
judge:
Write [vclab::confirmed] or [vclab::excluded] in a comment.