A G2P library in PyTorch
DeepPhonemizer is a library for grapheme-to-phoneme conversion based on Transformer models. It is intended for use in production text-to-speech systems where accuracy and efficiency matter. You can choose between a forward Transformer model (trained with CTC) and its autoregressive counterpart. The former is faster and more stable, while the latter is slightly more accurate.
Check out the inference and training tutorials on Colab!
Read the documentation at: https://as-ideas.github.io/DeepPhonemizer/
```bash
pip install deep-phonemizer
```
Download the pretrained model: en_us_cmudict_ipa_forward
```python
from dp.phonemizer import Phonemizer

phonemizer = Phonemizer.from_checkpoint('en_us_cmudict_ipa.pt')
phonemizer('Phonemizing an English text is imposimpable!', lang='en_us')
# 'foʊnɪmaɪzɪŋ æn ɪŋglɪʃ tɛkst ɪz ɪmpəzɪmpəbəl!'
```
You can easily train your own autoregressive or forward transformer model. All necessary parameters are set in a config.yaml, which you can find under:
```
dp/configs/forward_config.yaml
dp/configs/autoreg_config.yaml
```

for the forward and autoregressive transformer model, respectively.
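The config file bundles all preprocessing, model, and training settings in one place. A hypothetical excerpt to show the flavor (the key names below are illustrative, not the actual schema; check the shipped configs for the real keys):

```yaml
# Illustrative only - consult dp/configs/forward_config.yaml for the real schema.
paths:
  checkpoint_dir: checkpoints   # where trained models are stored
preprocessing:
  languages: ['de', 'en_us']    # languages the model is trained on
```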
Distributed training is supported. You can specify which GPUs to utilize by setting the CUDA_VISIBLE_DEVICES environment variable:
```bash
CUDA_VISIBLE_DEVICES=0,1 python run_training.py
```
Inside the training script, prepare the data as (language, word, phonemes) tuples and use the preprocess and train API:
```python
import torch
import torch.multiprocessing as mp

from dp.preprocess import preprocess
from dp.train import train

train_data = [('en_us', 'young', 'jʌŋ'),
              ('de', 'benützten', 'bənʏt͡stn̩'),
              ('de', 'gewürz', 'ɡəvʏʁt͡s')] * 1000

val_data = [('en_us', 'young', 'jʌŋ'),
            ('de', 'benützten', 'bənʏt͡stn̩')] * 100

config_file = 'dp/configs/forward_config.yaml'

preprocess(config_file=config_file,
           train_data=train_data,
           val_data=val_data,
           deduplicate_train_data=False)

num_gpus = torch.cuda.device_count()

if num_gpus > 1:
    mp.spawn(train, nprocs=num_gpus, args=(num_gpus, config_file))
else:
    train(rank=0, num_gpus=num_gpus, config_file=config_file)
```
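If your lexicon lives in a file, the tuple format above is easy to produce. A minimal sketch, assuming a tab-separated file with language, word, and phonemes columns (the file layout and helper name are assumptions for illustration, not part of the library):

```python
import csv
import tempfile

# Assumed layout: one entry per line, tab-separated as language<TAB>word<TAB>phonemes.
sample = 'en_us\tyoung\tjʌŋ\nde\tgewürz\tɡəvʏʁt͡s\n'

with tempfile.NamedTemporaryFile('w', suffix='.tsv', delete=False,
                                 encoding='utf-8') as f:
    f.write(sample)
    path = f.name

def load_lexicon(path: str) -> list:
    # Read (language, word, phonemes) tuples - the format preprocess() expects.
    with open(path, encoding='utf-8') as f:
        return [tuple(row) for row in csv.reader(f, delimiter='\t')]

train_data = load_lexicon(path)
print(train_data[0])  # ('en_us', 'young', 'jʌŋ')
```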
Model checkpoints will be stored in the checkpoints path specified in the config.yaml.
Load the phonemizer from a checkpoint and run a prediction. By default, the phonemizer stores a dictionary of word-phoneme mappings that is applied first, and it uses the Transformer model only to predict out-of-dictionary words.
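This dictionary-first lookup can be pictured with a plain Python sketch (the lexicon, fallback function, and words below are made up for illustration; DeepPhonemizer handles all of this internally):

```python
# Illustrative sketch of a dictionary-first G2P lookup with model fallback.
# The lexicon and fallback_model are made-up stand-ins for DeepPhonemizer's
# internal word-phoneme dictionary and Transformer predictor.

lexicon = {'young': 'jʌŋ', 'text': 'tɛkst'}

def fallback_model(word: str) -> str:
    # Stand-in for the Transformer: here we just echo the graphemes.
    return word

def phonemize_word(word: str) -> str:
    # A dictionary hit wins; the model only predicts out-of-dictionary words.
    return lexicon.get(word, fallback_model(word))

print(phonemize_word('young'))    # found in the dictionary
print(phonemize_word('zyzzyva'))  # falls back to the model
```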
```python
from dp.phonemizer import Phonemizer

phonemizer = Phonemizer.from_checkpoint('checkpoints/best_model.pt')
phonemes = phonemizer('Phonemizing an English text is imposimpable!', lang='en_us')
```
If you need more inference information, you can use the following API:
```python
from dp.phonemizer import Phonemizer

phonemizer = Phonemizer.from_checkpoint('checkpoints/best_model.pt')
result = phonemizer.phonemise_list(['Phonemizing an English text is imposimpable!'], lang='en_us')

for word, pred in result.predictions.items():
    print(f'{word} {pred.phonemes} {pred.confidence}')
```
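The per-word confidences make it easy to flag uncertain predictions for manual review. A minimal sketch, using a plain dict in place of result.predictions (the words, scores, and helper name are invented for illustration):

```python
# Flag words whose prediction confidence falls below a threshold.
# 'predictions' mimics the word -> (phonemes, confidence) data exposed by
# phonemise_list; the entries here are invented for illustration.

predictions = {
    'phonemizing': ('foʊnɪmaɪzɪŋ', 0.93),
    'imposimpable': ('ɪmpəzɪmpəbəl', 0.41),
}

def low_confidence_words(preds: dict, threshold: float = 0.8) -> list:
    # Collect words worth double-checking against a pronunciation dictionary.
    return [word for word, (_, conf) in preds.items() if conf < threshold]

print(low_confidence_words(predictions))  # ['imposimpable']
```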
| Model | Language | Dataset | Repo Version |
|---|---|---|---|
| en_us_cmudict_ipa_forward | en_us | cmudict-ipa | 0.0.10 |
| en_us_cmudict_forward | en_us | cmudict | 0.0.10 |
| latin_ipa_forward | en_uk, en_us, de, fr, es | wikipron | 0.0.10 |
You can easily export the underlying Transformer models with TorchScript:
```python
import torch

from dp.phonemizer import Phonemizer

phonemizer = Phonemizer.from_checkpoint('checkpoints/best_model.pt')
model = phonemizer.predictor.model
phonemizer.predictor.model = torch.jit.script(model)
phonemizer('Running the torchscript model!')
```
References:

- Transformer based Grapheme-to-Phoneme Conversion
- Grapheme-to-Phoneme Conversion using Long Short-Term Memory Recurrent Neural Networks