
lina-speech (beta)

Exploring "linear attention" for text-to-speech.

It predicts audio codec tokens "à la" MusicGen: the residual vector quantizer codebooks are interleaved with a delay pattern, so a single model suffices instead of one model per codebook.
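The MusicGen-style delay pattern can be sketched as follows (a minimal illustration under assumed conventions, not this project's actual code; the `PAD` token and the `(codebooks, time)` layout are assumptions):

```python
import numpy as np

PAD = -1  # hypothetical padding id for positions shifted out of range

def apply_delay(codes: np.ndarray) -> np.ndarray:
    """Shift codebook q by q steps so all codebooks are predicted in one pass.

    codes: (num_codebooks, seq_len) array of RVQ token ids.
    Returns a (num_codebooks, seq_len + num_codebooks - 1) delayed array.
    """
    q, t = codes.shape
    out = np.full((q, t + q - 1), PAD, dtype=codes.dtype)
    for i in range(q):
        out[i, i : i + t] = codes[i]  # codebook i starts i steps later
    return out

def undo_delay(delayed: np.ndarray, t: int) -> np.ndarray:
    """Invert apply_delay, recovering the original (q, t) code grid."""
    q = delayed.shape[0]
    return np.stack([delayed[i, i : i + t] for i in range(q)])
```

At generation time the model emits one token per codebook per step; because codebook `i` lags `i` steps behind codebook 0, each residual level can condition on the coarser levels already produced at the same audio frame.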

Featuring RWKV, Mamba, Gated Linear Attention.
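All three share the same linear-recurrence view of attention: a fixed-size state is decayed and updated at each step, giving O(T) time and O(1) state in sequence length instead of softmax attention's O(T²). A toy NumPy sketch of one gated step (shapes and the per-channel gate are illustrative assumptions, not the exact formulation of any of these models):

```python
import numpy as np

def gla_step(state, q, k, v, g):
    """One recurrent step of gated linear attention.

    state: (d_k, d_v) running key-value memory
    q, k:  (d_k,) query and key vectors
    v:     (d_v,) value vector
    g:     (d_k,) per-channel forget gate in (0, 1)
    """
    state = g[:, None] * state + np.outer(k, v)  # decay old memory, write new
    out = q @ state                              # read out a (d_v,) output
    return state, out
```

RWKV, Mamba, and GLA differ mainly in how the decay `g` and the update are parameterized and how the recurrence is computed in parallel during training.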

Compared to other LM-based TTS models:

Models

| Model | #Params | Dataset | Checkpoint | Steps | Note |
|---|---|---|---|---|---|
| GLA | 60M, 130M | Librilight-medium | Download | 300k | GPU inference only |
| Mamba | 60M | Librilight-medium | Download | 300k | GPU inference only |
| RWKV v6 | 60M | LibriTTS | Download | 150k | GPU inference only |

Installation

Depending on the linear-complexity LM you choose, first follow its respective installation instructions:

Inference

Download configuration and weights above, then check Inference.ipynb.

TODO

Acknowledgment

Cite

@software{lemerle2024linaspeech,
  title  = {LinaSpeech: Exploring "linear attention" for text-to-speech.},
  author = {Lemerle, Théodor},
  url    = {https://github.com/theodorblackbird/lina-speech},
  month  = apr,
  year   = {2024}
}

IRCAM

This work was performed in the Analysis/Synthesis team of the STMS Laboratory at IRCAM, as part of the ANR Exovoices project.