Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Progressive Residual Extraction based Pre-training for Speech Representation Learning
summary: Self-supervised learning (SSL) has garnered significant attention in speech
processing, excelling in linguistic tasks such as speech recognition. However,
jointly improving the performance of pre-trained models on various downstream
tasks, each requiring different speech information, poses significant
challenges. To this end, we propose a progressive residual extraction based
self-supervised learning method, named ProgRE. Specifically, we introduce two
lightweight and specialized task modules into an encoder-style SSL backbone to
enhance its ability to extract pitch variation and speaker information from
speech. Furthermore, to prevent the interference of reinforced pitch variation
and speaker information with irrelevant content information learning, we
residually remove the information extracted by these two modules from the main
branch. The main branch is then trained using HuBERT's speech masking
prediction to ensure the performance of the Transformer's deep-layer features
on content tasks. In this way, we can progressively extract pitch variation,
speaker, and content representations from the input speech. Finally, we can
combine multiple representations with diverse speech information using
different layer weights to obtain task-specific representations for various
downstream tasks. Experimental results indicate that our proposed method
achieves joint performance improvements on various tasks, such as speaker
identification, speech recognition, emotion recognition, speech enhancement,
and voice conversion, compared to strong SSL methods such as wav2vec2.0,
HuBERT, and WavLM.
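For context, the mechanism the summary describes (lightweight task modules whose pitch-variation and speaker outputs are residually subtracted from the main branch, followed by a content encoder, with per-task weighted mixing of the resulting representations) can be sketched roughly as below. This is a minimal illustration assuming PyTorch; the names ResidualExtractionBlock, ProgRESketch, the module sizes, and the weighted-sum helper are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ResidualExtractionBlock(nn.Module):
    """Lightweight stand-in for a specialized task module (pitch or speaker)."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.task_module = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x: torch.Tensor):
        # Extract task-specific information and remove it from the main branch.
        task_repr = self.task_module(x)
        return x - task_repr, task_repr


class ProgRESketch(nn.Module):
    """Illustrative progressive residual extraction pipeline (not the paper's code)."""

    def __init__(self, dim: int = 768, n_content_layers: int = 4):
        super().__init__()
        self.pitch_block = ResidualExtractionBlock(dim)
        self.speaker_block = ResidualExtractionBlock(dim)
        # Stand-in for the Transformer main branch trained with masked prediction.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.content_encoder = nn.TransformerEncoder(layer, num_layers=n_content_layers)

    def forward(self, features: torch.Tensor):
        # features: (batch, time, dim) frame-level features from an upstream front end.
        x, pitch_repr = self.pitch_block(features)
        x, speaker_repr = self.speaker_block(x)
        content_repr = self.content_encoder(x)
        return pitch_repr, speaker_repr, content_repr


def weighted_layer_combination(layer_reprs, weights):
    # Weighted sum over layer representations, so each downstream task can learn
    # its own mixture of pitch, speaker, and content information.
    w = torch.softmax(weights, dim=0)
    return sum(wi * r for wi, r in zip(w, layer_reprs))


# Example usage with random features and learnable per-task layer weights.
model = ProgRESketch()
feats = torch.randn(2, 100, 768)
pitch, spk, content = model(feats)
task_weights = nn.Parameter(torch.zeros(3))
task_repr = weighted_layer_combination([pitch, spk, content], task_weights)
```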
id: http://arxiv.org/abs/2409.00387v1
judge
Write [vclab::confirmed] or [vclab::excluded] in the comment.