
MIT License

PeriodWave: Multi-Period Flow Matching for High-Fidelity Waveform Generation
The official implementation of PeriodWave and PeriodWave-Turbo

[Hugging Face Spaces]() | Demo Page | Demo Page (Turbo)

Sang-Hoon Lee1,2, Ha-Yeong Choi3, Seong-Whan Lee4

1 Department of Software and Computer Engineering, Ajou University, Suwon, Korea
2 Department of Artificial Intelligence, Ajou University, Suwon, Korea
3 AI Tech Lab, KT Corp., Seoul, Korea
4 Department of Artificial Intelligence, Korea University, Seoul, Korea

This repository contains the official implementation of PeriodWave and PeriodWave-Turbo, along with training and inference scripts.

Update

24.08.16

In this repository, we provide a new neural vocoder paradigm and architecture that enables notably fast training and achieves SOTA performance. With 10 times less training time, we achieved state-of-the-art performance on LJSpeech and LibriTTS.

First, train PeriodWave with conditional flow matching.

Second, accelerate PeriodWave with adversarial flow matching optimization.
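The first stage's objective can be sketched in a few lines of NumPy. This is a hedged illustration of the conditional flow matching loss (optimal-transport path), not the actual PeriodWave training code; `model` is a toy stand-in for the period-aware generator.

```python
import numpy as np

def cfm_loss(x1, model, rng):
    """One conditional flow matching objective evaluation on a batch x1.

    OT path: x_t = (1 - t) * x0 + t * x1, target velocity u = x1 - x0.
    `model(x_t, t)` is any velocity predictor; here a toy stand-in,
    not the actual PeriodWave architecture.
    """
    x0 = rng.standard_normal(x1.shape)      # Gaussian noise sample
    t = rng.uniform(size=(x1.shape[0], 1))  # one time per batch item
    x_t = (1.0 - t) * x0 + t * x1           # point on the probability path
    u = x1 - x0                             # target velocity field
    v = model(x_t, t)                       # predicted velocity
    return float(np.mean((v - u) ** 2))     # flow matching MSE

rng = np.random.default_rng(0)
x1 = rng.standard_normal((4, 256))          # toy "waveform" batch
loss = cfm_loss(x1, lambda x_t, t: np.zeros_like(x_t), rng)
```

Training minimizes this MSE over random `(x0, t)` draws; sampling then integrates the learned velocity field from noise to waveform.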


Todo

PeriodWave

PeriodWave-Turbo

We compared several methods for PeriodWave-Turbo, including different reconstruction losses, distillation methods, and GANs. Fine-tuning the PeriodWave models with a fixed number of steps significantly improves performance! PeriodWave-Turbo utilizes the multi-scale Mel-spectrogram loss and adversarial training (MPD, CQT-D) following BigVGAN-v2. We highly appreciate the authors of BigVGAN for their dedication to the open-source implementation; thanks to their efforts, we were able to experiment quickly and reduce trial and error.
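As a rough illustration of the reconstruction side of this recipe, here is a minimal NumPy sketch of a multi-resolution log-magnitude STFT loss. It is a simplification standing in for the multi-scale Mel-spectrogram loss (no mel filterbank, no padding), and the `(n_fft, hop)` resolutions are assumed values, not the ones used in PeriodWave-Turbo.

```python
import numpy as np

def stft_mag(x, n_fft, hop):
    """Hann-windowed magnitude STFT (no padding, for illustration only)."""
    win = np.hanning(n_fft)
    frames = np.stack([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def multi_scale_spectral_loss(y_hat, y,
                              resolutions=((512, 128), (1024, 256), (2048, 512))):
    """Mean L1 log-magnitude distance over several (n_fft, hop) resolutions."""
    eps = 1e-5
    per_res = [np.mean(np.abs(np.log(stft_mag(y_hat, n, h) + eps)
                              - np.log(stft_mag(y, n, h) + eps)))
               for n, h in resolutions]
    return float(np.mean(per_res))

rng = np.random.default_rng(0)
y = rng.standard_normal(8192)                 # reference "waveform"
y_noisy = y + 0.1 * rng.standard_normal(8192)  # degraded copy
```

Averaging over several resolutions penalizes artifacts that any single FFT size would miss; the adversarial terms (MPD, CQT-D) are trained alongside this reconstruction term.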

TTS with PeriodWave

The era of Mel-spectrograms is returning with advancements in models such as P-Flow, VoiceBox, E2-TTS, DiTTo-TTS, ARDiT-TTS, and MELLE. PeriodWave can enhance the audio quality of your TTS models, eliminating the need to rely on codec models. Mel-spectrograms combined with powerful generative models have the potential to surpass neural codec language models in performance.

Getting Started

Pre-requisites

  1. PyTorch >= 1.13 and torchaudio >= 0.13
  2. Install requirements
    pip install -r requirements.txt

Prepare Dataset

  3. Prepare your own dataset (we utilized the LibriTTS dataset without any preprocessing)
  4. Extract energy min/max
    python extract_energy.py
  5. Change energy_max and energy_min in Config.json
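For intuition, here is a minimal sketch of what an energy min/max scan might look like. This is an assumption about what `extract_energy.py` computes (frame-level RMS energy over the corpus), not its actual implementation; check the script for the exact definition.

```python
import numpy as np

def frame_energy(wav, frame_len=1024, hop=256):
    """Per-frame RMS energy of one waveform (hypothetical stand-in
    for the energy definition used by extract_energy.py)."""
    frames = [wav[i:i + frame_len]
              for i in range(0, len(wav) - frame_len + 1, hop)]
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def dataset_energy_stats(wavs):
    """One pass over the corpus; the results would go into the
    energy_min / energy_max fields of Config.json."""
    e_min, e_max = np.inf, -np.inf
    for wav in wavs:
        e = frame_energy(wav)
        e_min, e_max = min(e_min, e.min()), max(e_max, e.max())
    return float(e_min), float(e_max)

# Toy corpus: two constant-amplitude signals
e_min, e_max = dataset_energy_stats([0.5 * np.ones(4096), 0.2 * np.ones(4096)])
```

The resulting bounds are used to normalize the energy condition to a fixed range during training.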

Train PeriodWave

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_periodwave.py -c configs/periodwave.json -m periodwave

Train PeriodWave-Turbo

Inference PeriodWave (24 kHz)

# PeriodWave
CUDA_VISIBLE_DEVICES=0 python inference.py --ckpt "logs/periodwave_base_libritts/G_1000000.pth" --iter 16 --noise_scale 0.667 --solver 'midpoint'

# PeriodWave with FreeU (--s_w 0.9 --b_w 1.1)
# Decreasing the skip features can reduce high-frequency noise in generated samples.
# We recommend FreeU only with PeriodWave. PeriodWave-Turbo behaves differently with FreeU, so we do not use FreeU with PeriodWave-Turbo.
CUDA_VISIBLE_DEVICES=0 python inference_with_FreeU.py --ckpt "logs/periodwave_libritts/G_1000000.pth" --iter 16 --noise_scale 0.667 --solver 'midpoint' --s_w 0.9 --b_w 1.1

# PeriodWave-Turbo-4steps (Highly Recommended)
CUDA_VISIBLE_DEVICES=0 python inference.py --ckpt "logs/periodwave_turbo_base_step4_libritts_24000hz/G_274000.pth" --iter 4 --noise_scale 1 --solver 'euler'
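The `--solver` and `--iter` flags select a fixed-step ODE integrator for the learned flow. Below is a hedged NumPy sketch of Euler (one velocity evaluation per step, as in the 4-step Turbo setting) versus midpoint (two evaluations per step) integration; `v` is a placeholder velocity field, not the PeriodWave network.

```python
import numpy as np

def sample_ode(v, x0, steps, solver="euler"):
    """Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 in `steps` fixed steps."""
    x, dt = x0.copy(), 1.0 / steps
    for k in range(steps):
        t = k * dt
        if solver == "euler":          # one evaluation per step
            x = x + dt * v(x, t)
        elif solver == "midpoint":     # two evaluations per step
            x_mid = x + 0.5 * dt * v(x, t)
            x = x + dt * v(x_mid, t + 0.5 * dt)
        else:
            raise ValueError(solver)
    return x

# Placeholder linear field dx/dt = -x; the exact solution at t=1 is x0 * exp(-1)
x0 = np.ones(3)
eul = sample_ode(lambda x, t: -x, x0, steps=16, solver="euler")
mid = sample_ode(lambda x, t: -x, x0, steps=16, solver="midpoint")
```

Midpoint is more accurate per step but costs two network evaluations, which is why the distilled Turbo model can afford plain Euler with only 4 steps.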

Reference

Flow matching for high-quality and efficient generative modeling

Inspired by the multi-period discriminator of HiFi-GAN, we first distill the multi-periodic property into the generator

Prior Distribution

Frequency-wise waveform modeling to address the limitations of high-frequency modeling

Highly efficient temporal modeling

Large-scale Universal Vocoder