Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: DisC-VC: Disentangled and F0-Controllable Neural Voice Conversion
summary: Voice conversion is the task of converting non-linguistic features of a
given utterance. Since the naturalness of speech strongly depends on its pitch
pattern, in some applications it is desirable to keep the original rise/fall
pitch pattern while changing the speaker identity. Some existing methods
address this problem either by using a source-filter model or by developing a
neural network that takes an F0 pattern as input. Although the latter approach
can achieve relatively high sound quality compared with the former, its
training process does not account for the discrepancy between the target and
generated F0 patterns. In this paper, we propose a new
variational-autoencoder-based voice conversion model accompanied by an
auxiliary network, which ensures that the conversion result correctly reflects
the specified F0/timbre information. We show the effectiveness of the proposed
method through objective and subjective evaluations.
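For orientation, the following is a minimal PyTorch sketch of the kind of model the summary describes: a VAE-based converter conditioned on an F0 contour and a speaker embedding, with an auxiliary network that re-predicts the F0 contour and speaker identity from the converted spectrogram so a consistency loss can penalize any mismatch. This is not the paper's actual DisC-VC implementation; all module names, layer sizes, and loss terms below are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's actual architecture): a VAE-based
# voice conversion model conditioned on F0 and speaker identity, plus an
# auxiliary network used for F0/timbre consistency losses. All layer sizes
# and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentEncoder(nn.Module):
    """Encodes a mel-spectrogram into a content latent (mean, log-variance)."""
    def __init__(self, n_mels=80, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.to_mu = nn.Conv1d(256, latent_dim, kernel_size=1)
        self.to_logvar = nn.Conv1d(256, latent_dim, kernel_size=1)

    def forward(self, mel):                      # mel: (B, n_mels, T)
        h = self.net(mel)
        return self.to_mu(h), self.to_logvar(h)


class Decoder(nn.Module):
    """Reconstructs a mel-spectrogram from the content latent, F0, and speaker embedding."""
    def __init__(self, n_mels=80, latent_dim=64, spk_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(latent_dim + 1 + spk_dim, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, n_mels, kernel_size=1),
        )

    def forward(self, z, f0, spk):               # f0: (B, 1, T), spk: (B, spk_dim)
        spk = spk.unsqueeze(-1).expand(-1, -1, z.size(-1))
        return self.net(torch.cat([z, f0, spk], dim=1))


class AuxiliaryNet(nn.Module):
    """Re-predicts the F0 contour and speaker identity from a (converted) spectrogram."""
    def __init__(self, n_mels=80, n_speakers=100):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.f0_head = nn.Conv1d(256, 1, kernel_size=1)
        self.spk_head = nn.Linear(256, n_speakers)

    def forward(self, mel):
        h = self.shared(mel)
        return self.f0_head(h), self.spk_head(h.mean(dim=-1))


def training_step(enc, dec, aux, mel, f0, spk_emb, spk_id):
    """One illustrative loss computation: VAE terms plus F0/speaker consistency."""
    mu, logvar = enc(mel)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
    mel_hat = dec(z, f0, spk_emb)

    recon = F.l1_loss(mel_hat, mel)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Consistency terms: the generated speech should carry the specified
    # F0 contour and the target speaker's timbre.
    f0_hat, spk_logits = aux(mel_hat)
    f0_consistency = F.l1_loss(f0_hat, f0)
    spk_consistency = F.cross_entropy(spk_logits, spk_id)

    return recon + kl + f0_consistency + spk_consistency
```

In a setup like this, the auxiliary losses are what tie the generated output to the specified F0/timbre, which is the gap in prior F0-conditioned models that the summary points out.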
Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json and will be used for VCLab's activity.
Thank you so much.
id: http://arxiv.org/abs/2210.11059v1
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.