Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: NANSY++: Unified Voice Synthesis with Neural Analysis and Synthesis
summary: Various applications of voice synthesis have been developed independently
despite the fact that they generate "voice" as output in common. In addition,
most of the voice synthesis models still require a large number of audio data
paired with annotated labels (e.g., text transcription and music score) for
training. To this end, we propose a unified framework of synthesizing and
manipulating voice signals from analysis features, dubbed NANSY++. The backbone
network of NANSY++ is trained in a self-supervised manner that does not require
any annotations paired with audio. After training the backbone network, we
efficiently tackle four voice applications - i.e. voice conversion,
text-to-speech, singing voice synthesis, and voice designing - by partially
modeling the analysis features required for each task. Extensive experiments
show that the proposed framework offers competitive advantages such as
controllability, data efficiency, and fast training convergence, while
providing high quality synthesis. Audio samples: tinyurl.com/8tnsy3uc.
id: http://arxiv.org/abs/2211.09407v1
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activities.
Thank you so much.
judge.
Write [vclab::confirmed] or [vclab::excluded] in a comment.
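For reference, here is a minimal sketch of how such a judgement might be appended to arXivSearches.json, assuming a flat list-of-records layout; the file schema and field names below are assumptions for illustration, not VCLab's documented format:

```python
import json
from pathlib import Path

# Hypothetical results file; the real arXivSearches.json schema is
# not documented here, so the field names below are assumptions.
RESULTS = Path("arXivSearches.json")

def record_judgement(arxiv_id: str, title: str, confirmed: bool) -> None:
    """Append one classification result to the JSON results file."""
    entries = json.loads(RESULTS.read_text()) if RESULTS.exists() else []
    entries.append({
        "id": arxiv_id,
        "title": title,
        "judgement": "[vclab::confirmed]" if confirmed else "[vclab::excluded]",
    })
    RESULTS.write_text(json.dumps(entries, indent=2))

# The NANSY++ abstract explicitly lists voice conversion among its
# four target applications, so this paper would be confirmed.
record_judgement(
    "http://arxiv.org/abs/2211.09407v1",
    "NANSY++: Unified Voice Synthesis with Neural Analysis and Synthesis",
    confirmed=True,
)
```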