Thank you very much for your contribution!
Your judgement is reflected in arXivSearches.json, and will be used for VCLab's activity.
Thank you so much.
Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: An Adaptive Learning based Generative Adversarial Network for One-To-One Voice Conversion
summary: Voice Conversion (VC) has emerged as a significant research domain within speech synthesis in recent years, owing to applications such as voice-assistive technology, automated movie dubbing, and speech-to-singing conversion. VC converts the vocal style of one speaker into that of another while keeping the linguistic content unchanged, and is typically performed through a three-stage pipeline consisting of speech analysis, speech feature mapping, and speech reconstruction. Generative Adversarial Network (GAN) models are now widely used for the speech feature mapping from the source to the target speaker. In this paper, we propose an adaptive learning-based GAN model called ALGAN-VC for efficient one-to-one VC. The ALGAN-VC framework combines several techniques to improve speech quality and voice similarity between the source and target speakers: the generator incorporates a Dense Residual Network (DRN)-like architecture for efficient learning of source-to-target speech feature conversion, an adaptive learning mechanism is integrated to compute the loss function, and a boosted learning rate approach is used to enhance the model's learning capability. The model is trained using forward and inverse mappings simultaneously for one-to-one VC. It is evaluated on the Voice Conversion Challenge (VCC) 2016, 2018, and 2020 datasets, as well as on a self-prepared speech dataset recorded in Indian regional languages and in English. Subjective and objective evaluations of the generated speech samples indicate that the proposed model performs the voice conversion task well, achieving high speaker similarity and adequate speech quality.
id: http://arxiv.org/abs/2104.12159v1
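For context while judging, here is a minimal sketch of the kind of forward/inverse (cycle-style) mapping objective the summary describes for one-to-one VC. The abstract gives no implementation details, so every module name, feature dimension, and hyperparameter below is an assumption for illustration; this is not the ALGAN-VC implementation, and it omits the paper's adaptive loss computation and boosted learning rate.

```python
# Illustrative sketch (not from the paper): CycleGAN-style forward/inverse mapping
# for one-to-one VC on per-frame spectral features. Names and sizes are assumptions.
import torch
import torch.nn as nn

FEAT_DIM = 36  # assumed feature size, e.g. mel-cepstral coefficients per frame


def mlp(in_dim, out_dim):
    # Simple stand-in for the generator/discriminator networks (the paper uses a
    # DRN-like generator; that architecture is not reproduced here).
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))


G_xy = mlp(FEAT_DIM, FEAT_DIM)  # forward mapping: source -> target features
G_yx = mlp(FEAT_DIM, FEAT_DIM)  # inverse mapping: target -> source features
D_y = mlp(FEAT_DIM, 1)          # discriminator on target-domain features

adv_loss = nn.BCEWithLogitsLoss()
cyc_loss = nn.L1Loss()
g_opt = torch.optim.Adam(list(G_xy.parameters()) + list(G_yx.parameters()), lr=2e-4)
d_opt = torch.optim.Adam(D_y.parameters(), lr=1e-4)


def train_step(x, y, lambda_cyc=10.0):
    """One update on a batch of source frames x and target frames y."""
    # Generator side: converted frames should fool D_y, and mapping forward then
    # inverse should reconstruct the source frames (cycle consistency).
    fake_y = G_xy(x)
    rec_x = G_yx(fake_y)
    g_adv = adv_loss(D_y(fake_y), torch.ones(x.size(0), 1))
    g_cyc = cyc_loss(rec_x, x)
    g_total = g_adv + lambda_cyc * g_cyc
    g_opt.zero_grad()
    g_total.backward()
    g_opt.step()

    # Discriminator side: real target frames vs. detached converted frames.
    d_real = adv_loss(D_y(y), torch.ones(y.size(0), 1))
    d_fake = adv_loss(D_y(G_xy(x).detach()), torch.zeros(x.size(0), 1))
    d_total = 0.5 * (d_real + d_fake)
    d_opt.zero_grad()
    d_total.backward()
    d_opt.step()
    return g_total.item(), d_total.item()


# Toy usage: random tensors stand in for frames extracted by a speech analyzer.
x = torch.randn(8, FEAT_DIM)
y = torch.randn(8, FEAT_DIM)
print(train_step(x, y))
```

The symmetric target-to-source cycle and its discriminator, which full cycle-consistent training would add, are left out to keep the sketch short.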
judge
Write [vclab::confirmed] or [vclab::excluded] in comment.