Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Differentiable WORLD Synthesizer-based Neural Vocoder With Application
To End-To-End Audio Style Transfer
summary: In this paper, we propose a differentiable WORLD synthesizer and demonstrate
its use in end-to-end audio style transfer tasks such as (singing) voice
conversion and the DDSP timbre transfer task. Accordingly, our baseline
differentiable synthesizer has no model parameters, yet it yields adequate
synthesis quality. We can extend the baseline synthesizer by appending
lightweight black-box postnets which apply further processing to the baseline
output in order to improve fidelity. An alternative differentiable approach
considers extraction of the source excitation spectrum directly, which can
improve naturalness albeit for a narrower class of style transfer applications.
The acoustic feature parameterization used by our approaches has the added
benefit that it naturally disentangles pitch and timbral information so that
they can be modeled separately. Moreover, as there exists a robust means of
estimating these acoustic features from monophonic audio sources, it allows for
parameter loss terms to be added to an end-to-end objective function, which can
help convergence and/or further stabilize (adversarial) training.
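As an aside on the summary's final point (parameter loss terms over disentangled acoustic features), below is a minimal sketch of how such terms might be combined with a waveform-domain objective, written in PyTorch. The feature names (f0, sp, ap), the STFT settings, and the weight lambda_param are illustrative assumptions, not the paper's actual training code.

    # Minimal sketch (PyTorch): adding acoustic-feature "parameter loss" terms
    # to an end-to-end waveform objective. All names here are hypothetical.
    import torch
    import torch.nn.functional as F

    def parameter_loss(pred, target):
        # L1 terms on WORLD-style features: F0, spectral envelope (sp),
        # and band aperiodicity (ap); dict keys are illustrative.
        return (F.l1_loss(pred["f0"], target["f0"])
                + F.l1_loss(pred["sp"], target["sp"])
                + F.l1_loss(pred["ap"], target["ap"]))

    def spectral_loss(y_hat, y, n_fft=1024, hop=256):
        # Log-magnitude STFT L1 between synthesized and reference waveforms.
        win = torch.hann_window(n_fft, device=y.device)
        s_hat = torch.stft(y_hat, n_fft, hop, window=win, return_complex=True).abs()
        s_ref = torch.stft(y, n_fft, hop, window=win, return_complex=True).abs()
        return F.l1_loss(torch.log(s_hat + 1e-5), torch.log(s_ref + 1e-5))

    def total_loss(y_hat, y, pred_feats, target_feats, lambda_param=1.0):
        # End-to-end objective: waveform-domain loss plus the feature-domain
        # parameter loss, which the summary says can help convergence and
        # stabilize (adversarial) training; lambda_param balances the terms.
        return spectral_loss(y_hat, y) + lambda_param * parameter_loss(pred_feats, target_feats)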
id: http://arxiv.org/abs/2208.07282v1
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.