Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Emotional Voice Conversion using Multitask Learning with Text-to-speech
summary: Voice conversion (VC) is the task of transforming a person's voice into a different style while preserving the linguistic content. The previous state of the art in VC is based on a sequence-to-sequence (seq2seq) model, which can distort linguistic information. One attempt to overcome this used textual supervision, but it requires explicit alignment, which forfeits the benefit of the seq2seq model. In this paper, a voice converter trained by multitask learning with text-to-speech (TTS) is presented. The embedding space of a seq2seq-based TTS model carries rich information about the text, and the role of the TTS decoder is to convert that embedding space into speech, which is the same role as in VC. In the proposed model, the whole network is trained to minimize the sum of the VC and TTS losses. Through multitask learning, VC is expected to capture more linguistic information and to train more stably. VC experiments were performed on a male Korean emotional text-speech dataset, and the results show that multitask learning helps preserve linguistic content in VC.
id: http://arxiv.org/abs/1911.06149v2
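For clarity, the multitask objective described in the summary amounts to a shared decoder driven by two encoders, trained on the sum of the VC and TTS losses. Below is a minimal PyTorch-style sketch of that idea; all module names, dimensions, and the L1 losses are illustrative assumptions (the sketch omits attention and assumes matched sequence lengths), not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultitaskVC(nn.Module):
    """Hypothetical sketch: VC and TTS branches sharing one decoder."""
    def __init__(self, n_mels=80, n_symbols=128, dim=256):
        super().__init__()
        self.speech_encoder = nn.GRU(n_mels, dim, batch_first=True)  # VC branch
        self.text_encoder = nn.Embedding(n_symbols, dim)             # TTS branch
        self.decoder = nn.GRU(dim, dim, batch_first=True)            # shared decoder
        self.to_mel = nn.Linear(dim, n_mels)

    def forward(self, src_mel, text):
        vc_emb, _ = self.speech_encoder(src_mel)   # (B, T, dim) from source speech
        tts_emb = self.text_encoder(text)          # (B, L, dim) from the transcript
        vc_out, _ = self.decoder(vc_emb)           # same decoder for both tasks
        tts_out, _ = self.decoder(tts_emb)
        return self.to_mel(vc_out), self.to_mel(tts_out)

model = MultitaskVC()
src_mel = torch.randn(2, 100, 80)       # source-style speech (mel-spectrogram)
tgt_mel = torch.randn(2, 100, 80)       # target-style speech
text = torch.randint(0, 128, (2, 100))  # transcript, padded to T for simplicity
tts_ref = torch.randn(2, 100, 80)       # reference speech for the TTS branch

vc_mel, tts_mel = model(src_mel, text)
# Whole network minimizes the sum of the VC and TTS losses.
loss = (nn.functional.l1_loss(vc_mel, tgt_mel)
        + nn.functional.l1_loss(tts_mel, tts_ref))
loss.backward()
```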
judge
Write 'confirmed' or 'excluded' in [] as a comment.
Thank you very much for your contribution! Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.