Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: vec2wav 2.0: Advancing Voice Conversion via Discrete Token Vocoders
summary: We propose a new speech discrete token vocoder, vec2wav 2.0, which advances
voice conversion (VC). We use discrete tokens from speech self-supervised
models as the content features of source speech, and treat VC as a prompted
vocoding task. To compensate for the loss of speaker timbre in the content
tokens, vec2wav 2.0 utilizes WavLM features to provide strong timbre-dependent
information. A novel adaptive Snake activation function is proposed to better
incorporate timbre into the waveform reconstruction process. In this way,
vec2wav 2.0 learns to alter the speaker timbre appropriately given different
reference prompts. Moreover, no supervised data is required for vec2wav 2.0 to
be trained effectively. Experimental results demonstrate that vec2wav 2.0
outperforms all other baselines by a considerable margin in terms of audio
quality and speaker similarity in any-to-any VC. Ablation studies verify the
effectiveness of the proposed techniques. Furthermore, vec2wav 2.0 achieves
competitive cross-lingual VC even when trained only on a monolingual corpus.
Thus, vec2wav 2.0 shows that timbre can potentially be manipulated solely by
speech token vocoders, pushing the frontiers of VC and speech synthesis.
id: http://arxiv.org/abs/2409.01995v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.