Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Towards Improved Zero-shot Voice Conversion with Conditional DSVAE
summary: Disentangling content and speaking-style information is essential for zero-shot non-parallel voice conversion (VC). Our previous study investigated a novel framework with a disentangled sequential variational autoencoder (DSVAE) as the backbone for information decomposition, and demonstrated that simultaneously disentangling the content embedding and the speaker embedding from one utterance is feasible for zero-shot VC. In this study, we continue in this direction by raising a concern about the prior distribution of the content branch in the DSVAE baseline. We find that the randomly initialized prior distribution forces the content embedding to lose phonetic-structure information during the learning process, which is not a desired property. Here, we seek a better content embedding that preserves more phonetic information. We propose the conditional DSVAE, a new model that introduces a content bias as a condition on the prior modeling and reshapes the content embedding sampled from the posterior distribution. In our experiments on the VCTK dataset, we demonstrate that content embeddings derived from the conditional DSVAE overcome the randomness and achieve much better phoneme classification accuracy, more stable vocalization, and better zero-shot VC performance than the competitive DSVAE baseline.
id: http://arxiv.org/abs/2205.05227v1

judge
Write [vclab::confirmed] or [vclab::excluded] in comment.

Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.