Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Zero-shot Voice Conversion with Diffusion Transformers
summary: Zero-shot voice conversion aims to transform a source speech utterance to
match the timbre of a reference speech from an unseen speaker. Traditional
approaches struggle with timbre leakage, insufficient timbre representation,
and mismatches between training and inference tasks. We propose Seed-VC, a
novel framework that addresses these issues by introducing an external timbre
shifter during training to perturb the source speech timbre, mitigating leakage
and aligning training with inference. Additionally, we employ a diffusion
transformer that leverages the entire reference speech context, capturing
fine-grained timbre features through in-context learning. Experiments
demonstrate that Seed-VC outperforms strong baselines like OpenVoice and
CosyVoice, achieving higher speaker similarity and lower word error rates in
zero-shot voice conversion tasks. We further extend our approach to zero-shot
singing voice conversion by incorporating fundamental frequency (F0)
conditioning, achieving performance comparable to current state-of-the-art
methods. Our findings highlight the effectiveness of Seed-VC in overcoming core
challenges, paving the way for more accurate and versatile voice conversion
systems.
id: http://arxiv.org/abs/2411.09943v1
judge
Write [vclab::confirmed] or [vclab::excluded] in the comment.