Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: AUTOVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
summary: Non-parallel many-to-many voice conversion, as well as zero-shot voice
conversion, remain under-explored areas. Deep style transfer algorithms, such
as generative adversarial networks (GAN) and conditional variational
autoencoder (CVAE), are being applied as new solutions in this field. However,
GAN training is sophisticated and difficult, and there is no strong evidence
that its generated speech is of good perceptual quality. On the other hand,
CVAE training is simple but does not come with the distribution-matching
property of a GAN. In this paper, we propose a new style transfer scheme that
involves only an autoencoder with a carefully designed bottleneck. We formally
show that this scheme can achieve distribution-matching style transfer by
training only on a self-reconstruction loss. Based on this scheme, we proposed
AUTOVC, which achieves state-of-the-art results in many-to-many voice
conversion with non-parallel data, and which is the first to perform zero-shot
voice conversion.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/1905.05879v2
judge
Write 'confirmed' or 'excluded' in [] as a comment.