Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing
summary: Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-training natural language processing models, we propose a unified-modal SpeechT5 framework that explores encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After the pre-nets preprocess the speech/text input, the shared encoder-decoder network models the sequence-to-sequence transformation, and the post-nets then generate the output in the speech/text modality from the decoder output. In particular, SpeechT5 can be pre-trained on large-scale unlabeled speech and text data to improve its speech and text modeling capability. To align the textual and acoustic information in a unified semantic space, we propose a cross-modal vector quantization method with random mixing-up to bridge speech and text. Extensive evaluations on a wide variety of spoken language processing tasks, including voice conversion, automatic speech recognition, text-to-speech, and speaker identification, show the superiority of the proposed SpeechT5 framework.
id: http://arxiv.org/abs/2110.07205v1
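
To make the described layout concrete for the relevance check, here is a minimal sketch of a SpeechT5-style model: a shared encoder-decoder wrapped by modality-specific pre/post-nets. All class names, dimensions, and the use of torch.nn.Transformer are illustrative assumptions, not the authors' implementation; the paper's actual framework has six pre/post-nets (separate encoder- and decoder-side pre-nets), which this sketch collapses into one pre-net per modality.

import torch
import torch.nn as nn

class SpeechT5Sketch(nn.Module):
    """Hypothetical sketch: shared backbone plus modal pre/post-nets."""

    def __init__(self, d_model=768, vocab_size=10000, n_mels=80):
        super().__init__()
        # Shared sequence-to-sequence backbone (standard Transformer).
        self.backbone = nn.Transformer(d_model=d_model, batch_first=True)
        # Modal-specific pre-nets: map each modality into the shared space.
        self.speech_prenet = nn.Linear(n_mels, d_model)       # e.g. log-mel frames
        self.text_prenet = nn.Embedding(vocab_size, d_model)  # token ids
        # Modal-specific post-nets: map decoder states back to each modality.
        self.speech_postnet = nn.Linear(d_model, n_mels)      # spectrogram frames
        self.text_postnet = nn.Linear(d_model, vocab_size)    # token logits

    def forward(self, src, tgt, src_mod="speech", tgt_mod="text"):
        prenets = {"speech": self.speech_prenet, "text": self.text_prenet}
        postnets = {"speech": self.speech_postnet, "text": self.text_postnet}
        enc_in = prenets[src_mod](src)   # pre-net for the input modality
        dec_in = prenets[tgt_mod](tgt)   # pre-net for the output modality
        dec_out = self.backbone(enc_in, dec_in)
        return postnets[tgt_mod](dec_out)

# Any speech/text direction reuses the same backbone, e.g. ASR-style use:
# model = SpeechT5Sketch()
# logits = model(mel_frames, token_ids, src_mod="speech", tgt_mod="text")

Because one backbone serves every input/output modality pair, the same pre-trained weights support voice conversion (speech-to-speech), ASR (speech-to-text), and TTS (text-to-speech), which is why voice conversion appears among the evaluated tasks.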
Thank you very much for your contribution! Your judgement will be reflected in arXivSearches.json and used for VCLab's activity.
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.