Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Disentangling the Prosody and Semantic Information with Pre-trained Model for In-Context Learning based Zero-Shot Voice Conversion
summary: Voice conversion (VC) aims to modify the speaker's timbre while retaining
speech content. Previous approaches have tokenized the outputs of
self-supervised models into semantic tokens, facilitating disentanglement of speech
content information. Recently, in-context learning (ICL) has emerged in
text-to-speech (TTS) systems for effectively modeling specific characteristics
such as timbre through context conditioning. This paper proposes an ICL
capability enhanced VC system (ICL-VC) employing a mask and reconstruction
training strategy based on flow-matching generative models. Augmented with
semantic tokens, our experiments on the LibriTTS dataset demonstrate that
ICL-VC improves speaker similarity. Additionally, we find that k-means is a
versatile tokenization method applicable to various pre-trained models.
However, the ICL-VC system faces challenges in preserving the prosody of the
source speech. To mitigate this issue, we propose incorporating prosody
embeddings extracted from a pre-trained emotion recognition model into our
system. Integration of prosody embeddings notably enhances the system's
capability to preserve source speech prosody, as validated on the Emotional
Speech Database.
id: http://arxiv.org/abs/2409.05004v1
judge:
Write [vclab::confirmed] or [vclab::excluded] in a comment.