Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: HAM-TTS: Hierarchical Acoustic Modeling for Token-Based Zero-Shot Text-to-Speech with Model and Data Scaling
summary: Token-based text-to-speech (TTS) models have emerged as a promising avenue
for generating natural and realistic speech, yet they grapple with low
pronunciation accuracy, inconsistent speaking style and timbre, and a
substantial need for diverse training data. In response, we introduce a novel
hierarchical acoustic modeling approach complemented by a tailored data
augmentation strategy, and train it on a combination of real and synthetic
data, scaling the training data to 650k hours and yielding a zero-shot TTS
model with 0.8B parameters. Specifically, our method uses a predictor to
incorporate into the TTS model a latent variable sequence carrying
supplementary acoustic information derived from refined self-supervised
learning (SSL) discrete units. This significantly mitigates pronunciation errors and style
mutations in synthesized speech. During training, we strategically replace and
duplicate segments of the data to enhance timbre uniformity. Moreover, a
pretrained few-shot voice conversion model is used to generate many voices
with identical content but varied timbres. This facilitates the
explicit learning of utterance-level one-to-many mappings, enriching speech
diversity and also ensuring consistency in timbre. Comparative experiments
(Demo page: https://anonymous.4open.science/w/ham-tts/) demonstrate our model's
superiority over VALL-E in pronunciation precision, speaking-style preservation,
and timbre continuity.
id: http://arxiv.org/abs/2403.05989v1
judge
Write [vclab::confirmed] or [vclab::excluded] in the comment.