Please check whether this paper is about 'Voice Conversion' or not.
article info:
title: PAVITS: Exploring Prosody-aware VITS for End-to-End Emotional Voice Conversion
summary: In this paper, we propose Prosody-aware VITS (PAVITS) for emotional voice conversion (EVC), aiming to achieve two major objectives of EVC: high content naturalness and high emotional naturalness, which are crucial for meeting the demands of human perception. To improve the content naturalness of converted audio, we have developed an end-to-end EVC architecture inspired by the high audio quality of VITS. By seamlessly integrating an acoustic converter and vocoder, we effectively address the mismatch between emotional prosody training and run-time conversion that is prevalent in existing EVC models. To further enhance the emotional naturalness, we introduce an emotion descriptor to model the subtle prosody variations of different speech emotions. Additionally, we propose a prosody predictor, which predicts prosody features from text based on the provided emotion label. Notably, we introduce a prosody alignment loss to establish a connection between latent prosody features from two distinct modalities, ensuring effective training. Experimental results show that the performance of PAVITS is superior to state-of-the-art EVC methods. Speech samples are available at https://jeremychee4.github.io/pavits4EVC/.
id: http://arxiv.org/abs/2403.01494v1
judge:
Write [vclab::confirmed] or [vclab::excluded] in the comment.