Please check whether this paper is about 'Voice Conversion' or not.
article info.
title: Articulatory Encodec: Vocal Tract Kinematics as a Codec for Speech
summary: Vocal tract articulation is a natural, grounded control space of speech
production. The spatiotemporal coordination of articulators combined with the
vocal source shapes intelligible speech sounds to enable effective spoken
communication. Based on this physiological grounding of speech, we propose a
new framework of neural encoding-decoding of speech -- articulatory encodec.
The articulatory encodec comprises an articulatory analysis model that infers
articulatory features from speech audio, and an articulatory synthesis model
that synthesizes speech audio from articulatory features. The articulatory
features are kinematic traces of vocal tract articulators and source features,
which are intuitively interpretable and controllable, being the actual physical
interface of speech production. An additional speaker identity encoder is
jointly trained with the articulatory synthesizer to inform the voice texture
of individual speakers. By training on large-scale speech data, we achieve a
fully intelligible, high-quality articulatory synthesizer that generalizes to
unseen speakers. Furthermore, the speaker embedding is effectively disentangled
from articulations, which enables accent-preserving zero-shot voice conversion.
To the best of our knowledge, this is the first demonstration of universal,
high-performance articulatory inference and synthesis, suggesting the proposed
framework as a powerful coding system of speech.
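
For reference, a minimal sketch of how the encode/decode pipeline and the speaker-embedding swap described in the summary might look in code. All class names, method names, feature dimensions, and frame rates below are illustrative assumptions, not the paper's actual API.

# Hypothetical interface sketch of the articulatory encodec pipeline.
# Names, shapes, and the ~100 Hz feature rate are assumptions for
# illustration only.
from dataclasses import dataclass

import numpy as np


@dataclass
class ArticulatoryCode:
    """Hypothetical codec output: articulator kinematics plus source and speaker features."""
    kinematics: np.ndarray  # (T, n_articulators) vocal tract traces
    source: np.ndarray      # (T, n_source) vocal source features
    speaker: np.ndarray     # (d,) speaker identity embedding


def encode(audio: np.ndarray, sr: int = 16_000) -> ArticulatoryCode:
    """Stand-in for the articulatory analysis model plus speaker encoder."""
    n_frames = max(1, len(audio) // (sr // 100))  # assumed ~100 Hz feature rate
    return ArticulatoryCode(
        kinematics=np.zeros((n_frames, 12)),
        source=np.zeros((n_frames, 2)),
        speaker=np.zeros(64),
    )


def decode(code: ArticulatoryCode, sr: int = 16_000) -> np.ndarray:
    """Stand-in for the articulatory synthesis model."""
    return np.zeros(code.kinematics.shape[0] * (sr // 100))


# Zero-shot voice conversion as the summary describes it: keep the source
# speaker's articulatory features (hence the accent) and swap in the target
# speaker's embedding before decoding.
src = encode(np.zeros(16_000))
tgt = encode(np.zeros(16_000))
converted = decode(ArticulatoryCode(src.kinematics, src.source, tgt.speaker))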
id: http://arxiv.org/abs/2406.12998v1
judge
Write [vclab::confirmed] or [vclab::excluded] in a comment.