Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Scalable Factorized Hierarchical Variational Autoencoder Training
summary: Deep generative models have achieved great success in unsupervised learning
with the ability to capture complex nonlinear relationships between latent
generating factors and observations. Among them, a factorized hierarchical
variational autoencoder (FHVAE) is a variational inference-based model that
formulates a hierarchical generative process for sequential data. Specifically,
an FHVAE model can learn disentangled and interpretable representations, which
have been proven useful for numerous speech applications, such as speaker
verification, robust speech recognition, and voice conversion. However, as we
will elaborate in this paper, the training algorithm proposed in the original
paper is not scalable to datasets of thousands of hours, which makes this model
less applicable on a larger scale. After identifying limitations in terms of
runtime, memory, and hyperparameter optimization, we propose a hierarchical
sampling training algorithm to address all three issues. Our proposed method is
evaluated comprehensively on a wide variety of datasets, ranging from 3 to
1,000 hours and involving different types of generating factors, such as
recording conditions and noise types. In addition, we also present a new
visualization method for qualitatively evaluating the performance with respect
to the interpretability and disentanglement. Models trained with our proposed
algorithm demonstrate the desired characteristics on all the datasets.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/1804.03201v2
judge
Write 'confirmed' or 'excluded' in [] as a comment.