Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Conditional Deep Hierarchical Variational Autoencoder for Voice Conversion
summary: Variational autoencoder-based voice conversion (VAE-VC) has the advantage of requiring only pairs of speech and speaker labels for training. Unlike the majority of VAE-VC research, which focuses on utilizing auxiliary losses or discretizing latent variables, this paper investigates the benefits and impacts of increasing model expressiveness on VAE-VC. Specifically, we first analyze VAE-VC from a rate-distortion perspective and point out that model expressiveness is significant for VAE-VC because rate and distortion reflect the similarity and naturalness of converted speech. Based on this analysis, we propose a novel VC method using a deep hierarchical VAE, which has high model expressiveness as well as fast conversion speed thanks to its non-autoregressive decoder. Our analysis also reveals another problem: similarity can degrade when the latent variable of the VAE contains redundant information. We address this problem by controlling the information contained in the latent variable using the $\beta$-VAE objective. In experiments on the VCTK corpus, the proposed method achieved mean opinion scores higher than 3.5 for both naturalness and similarity in inter-gender settings, which are higher than the scores of existing autoencoder-based VC methods.
id: http://arxiv.org/abs/2112.02796v1
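(Background note for judging, not taken from the paper: the $\beta$-VAE objective mentioned in the summary is the standard evidence lower bound with the KL term weighted by a coefficient $\beta$, where $q_\phi(z \mid x)$, $p_\theta(x \mid z)$, and $p(z)$ denote the usual encoder, decoder, and prior:

$\mathcal{L}_{\beta} = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta\, D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)$

Under the rate-distortion reading used in the summary, the reconstruction term corresponds to distortion (naturalness) and the KL term to rate (the information carried by the latent variable), so raising $\beta$ limits redundant information in $z$, which is how the summary describes controlling similarity.)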
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.