Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Multi-Target Emotional Voice Conversion With Neural Vocoders
summary: Emotional voice conversion (EVC) is one way to generate expressive synthetic
speech. Previous approaches mainly focused on modeling one-to-one mapping,
i.e., conversion from one emotional state to another emotional state, with
Mel-cepstral vocoders. In this paper, we investigate building a multi-target
EVC (MTEVC) architecture, which combines a deep bidirectional long short-term
memory (DBLSTM)-based conversion model and a neural vocoder. Phonetic
posteriorgrams (PPGs) containing rich linguistic information are incorporated
into the conversion model as auxiliary input features, which boost the
conversion performance. To leverage the advantages of recently emerged neural
vocoders, we investigate the conditional WaveNet and the flow-based WaveNet
(FloWaveNet) as speech generators. The vocoders take in additional speaker
information and emotion information as auxiliary features and are trained with
a multi-speaker and multi-emotion speech corpus. Objective metrics and
subjective evaluation of the experimental results verify the efficacy of the
proposed MTEVC architecture for EVC.
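The abstract describes two feature-conditioning steps: PPGs are concatenated with acoustic features as auxiliary input to the conversion model, and the neural vocoder receives additional speaker and emotion codes. A minimal sketch of that wiring is below; the function names and all dimensions (40-dim mel-cepstra, 144-dim PPGs, 4 speakers, 5 emotions) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def conversion_model_input(mcep, ppg):
    """Concatenate frame-aligned spectral features with PPGs (auxiliary input)."""
    assert mcep.shape[0] == ppg.shape[0], "features must be frame-aligned"
    return np.concatenate([mcep, ppg], axis=1)

def vocoder_conditioning(acoustic, speaker_id, emotion_id, n_speakers, n_emotions):
    """Append one-hot speaker and emotion codes to every frame for the vocoder."""
    n_frames = acoustic.shape[0]
    spk = np.eye(n_speakers)[np.full(n_frames, speaker_id)]
    emo = np.eye(n_emotions)[np.full(n_frames, emotion_id)]
    return np.concatenate([acoustic, spk, emo], axis=1)

# Illustrative shapes: 100 frames, 40-dim mel-cepstra, 144-dim PPGs.
x = conversion_model_input(np.zeros((100, 40)), np.zeros((100, 144)))
c = vocoder_conditioning(x, speaker_id=2, emotion_id=1, n_speakers=4, n_emotions=5)
print(x.shape, c.shape)  # → (100, 184) (100, 193)
```

This only shows how the auxiliary features are assembled; the DBLSTM conversion model and the WaveNet/FloWaveNet generators described in the abstract would consume these arrays.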
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2004.03782v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.