Unofficial PyTorch implementation of MelGAN vocoder
Tested on Python 3.6. Install dependencies with:

```bash
pip install -r requirements.txt
```
Preprocess the dataset:

```bash
python preprocess.py -c config/default.yaml -d [data's root path]
```
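After preprocessing, every `*.wav` under the data root should have a matching `*.mel` file. The sanity check below is a hypothetical helper, not part of this repo; it assumes the `.mel` files are saved alongside the audio with the same stem:

```python
from pathlib import Path

def find_missing_mels(root):
    """Return wav files under `root` that lack a matching .mel file."""
    missing = []
    for wav in sorted(Path(root).rglob("*.wav")):
        # preprocess.py is assumed to write <name>.mel next to <name>.wav
        if not wav.with_suffix(".mel").exists():
            missing.append(wav)
    return missing
```

If the returned list is non-empty, re-run preprocessing before training.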
Edit the configuration yaml file, then start training:

```bash
python trainer.py -c [config yaml file] -n [name of the run]
```
```bash
cp config/default.yaml config/config.yaml
```

and then edit `config.yaml`.
Each data path should contain `*.wav` files with corresponding (preprocessed) `*.mel` files. To monitor training, run:

```bash
tensorboard --logdir logs/
```
Try with Google Colab: TODO
```python
import torch

vocoder = torch.hub.load('seungwonpark/melgan', 'melgan')
vocoder.eval()
mel = torch.randn(1, 80, 234)  # use your own mel-spectrogram here

if torch.cuda.is_available():
    vocoder = vocoder.cuda()
    mel = mel.cuda()

with torch.no_grad():
    audio = vocoder.inference(mel)
```
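The resulting audio tensor can be written to disk as a 16-bit WAV. Here is a minimal stdlib sketch, assuming the vocoder outputs mono audio scaled to [-1, 1] at LJSpeech's 22050 Hz sample rate; the conversion works on plain Python floats such as `audio.squeeze().cpu().tolist()`:

```python
import math
import struct
import wave

def write_wav(path, samples, sample_rate=22050):
    """Write floats in [-1, 1] to `path` as mono 16-bit PCM."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)   # mono
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(sample_rate)
        pcm = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(pcm)

# stand-in for vocoder output: a one-second 440 Hz sine wave
tone = [math.sin(2 * math.pi * 440 * t / 22050) for t in range(22050)]
write_wav("tone.wav", tone)
```

With the real model, `write_wav("out.wav", audio.squeeze().cpu().tolist())` would be the analogous call; check the vocoder's actual output scale first, since a model may already emit int16-range values.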
```bash
python inference.py -p [checkpoint path] -i [input mel path]
```
See audio samples at http://swpark.me/melgan/. The model was trained on a V100 GPU for 14 days using LJSpeech-1.1.
BSD 3-Clause License.