Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Transplantation of Conversational Speaking Style with Interjections in
Sequence-to-Sequence Speech Synthesis
summary: Sequence-to-Sequence Text-to-Speech architectures that directly generate low
level acoustic features from phonetic sequences are known to produce natural
and expressive speech when provided with adequate amounts of training data.
Such systems can learn and transfer desired speaking styles from one seen
speaker to another (in multi-style multi-speaker settings), which is highly
desirable for creating scalable and customizable Human-Computer Interaction
systems. In this work we explore one-to-many style transfer from a dedicated
single-speaker conversational corpus with style nuances and interjections. We
elaborate on the corpus design and explore the feasibility of such style
transfer when assisted with Voice-Conversion-based data augmentation. In a set
of subjective listening experiments, this approach resulted in high-fidelity
style transfer with no quality degradation. However, a certain voice persona
shift was observed, requiring further improvements in voice conversion.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2207.12262v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.