Thank you very much for your contribution!
Your judgement will be reflected in arXivSearches.json and used for VCLab's activity.
Thank you so much.
Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Adversarial Contrastive Predictive Coding for Unsupervised Learning of Disentangled Representations
summary: In this work we tackle the disentanglement of speaker- and content-related variations in speech signals. We propose a fully convolutional variational autoencoder employing two encoders: a content encoder and a speaker encoder. To foster disentanglement, we propose adversarial contrastive predictive coding. This new disentanglement method requires neither parallel data nor any supervision, not even speaker labels. With successful disentanglement, the model is able to perform voice conversion by recombining content and speaker attributes. Because the speaker encoder learns to extract speaker traits from an audio signal, the proposed model not only provides meaningful speaker embeddings but is also able to perform zero-shot voice conversion, i.e. with previously unseen source and target speakers. Compared to state-of-the-art disentanglement approaches, we show competitive disentanglement and voice conversion performance for speakers seen during training and superior performance for unseen speakers.
id: http://arxiv.org/abs/2005.12963v1
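
To make the recombination idea in the summary concrete, here is a minimal PyTorch sketch of a two-encoder autoencoder performing voice conversion by pairing source content with a target speaker embedding. This is not the paper's implementation: all names (ContentEncoder, SpeakerEncoder, convert), layer sizes, and the mean-pooling choice are assumptions for illustration, and the variational objective and the adversarial contrastive predictive coding loss that actually drive the disentanglement are omitted.

```python
import torch
import torch.nn as nn


class ContentEncoder(nn.Module):
    """Maps a mel spectrogram (B, n_mels, T) to a frame-level content sequence."""

    def __init__(self, n_mels=80, content_dim=64):  # hypothetical sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, content_dim, kernel_size=5, padding=2),
        )

    def forward(self, mel):
        return self.net(mel)  # (B, content_dim, T)


class SpeakerEncoder(nn.Module):
    """Summarizes an utterance into a single time-invariant speaker embedding."""

    def __init__(self, n_mels=80, speaker_dim=32):  # hypothetical sizes
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, speaker_dim, kernel_size=5, padding=2),
        )

    def forward(self, mel):
        h = self.net(mel)      # (B, speaker_dim, T)
        return h.mean(dim=2)   # temporal pooling -> (B, speaker_dim)


class Decoder(nn.Module):
    """Reconstructs the mel spectrogram from content frames plus a speaker embedding."""

    def __init__(self, n_mels=80, content_dim=64, speaker_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(content_dim + speaker_dim, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, content, speaker):
        # Broadcast the utterance-level speaker embedding over all time steps.
        spk = speaker.unsqueeze(-1).expand(-1, -1, content.size(-1))
        return self.net(torch.cat([content, spk], dim=1))


def convert(content_enc, speaker_enc, decoder, source_mel, target_mel):
    """Voice conversion by recombination: source content + target speaker."""
    content = content_enc(source_mel)
    speaker = speaker_enc(target_mel)
    return decoder(content, speaker)


# Smoke test with random "utterances" of 100 frames.
src = torch.randn(1, 80, 100)
tgt = torch.randn(1, 80, 100)
out = convert(ContentEncoder(), SpeakerEncoder(), Decoder(), src, tgt)
print(out.shape)  # torch.Size([1, 80, 100])
```

The design point the summary relies on is that the speaker embedding is a single time-invariant vector, so swapping it between utterances changes speaker identity without touching the frame-level content sequence; the paper's adversarial contrastive predictive coding is what prevents speaker information from leaking into the content path.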
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.