shtoshni/g2p

Code for the SLT 2016 paper on Grapheme-to-Phoneme conversion using attention-based encoder-decoder models

Jointly learning to align and convert graphemes to phonemes with neural attention models

Grapheme-to-Phoneme (G2P) conversion using attention-based encoder-decoder models.
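For readers unfamiliar with the architecture, below is a minimal sketch of an attention-based encoder-decoder for G2P. It is written in PyTorch and is independent of this repository's actual implementation; the hidden size, dot-product attention, and integer-encoded grapheme/phoneme vocabularies are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch (not this repository's code): an attention-based
# encoder-decoder for G2P. Inputs are integer-encoded grapheme and
# phoneme sequences; the decoder attends over encoder states each step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class G2PAttentionModel(nn.Module):
    def __init__(self, num_graphemes, num_phonemes, hidden_size=128):
        super().__init__()
        self.grapheme_emb = nn.Embedding(num_graphemes, hidden_size)
        self.phoneme_emb = nn.Embedding(num_phonemes, hidden_size)
        # Bidirectional encoder over the grapheme sequence.
        self.encoder = nn.GRU(hidden_size, hidden_size,
                              batch_first=True, bidirectional=True)
        # Decoder consumes the previous phoneme embedding plus the context vector.
        self.decoder = nn.GRUCell(hidden_size + 2 * hidden_size, hidden_size)
        self.attn_proj = nn.Linear(2 * hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size + 2 * hidden_size, num_phonemes)

    def forward(self, graphemes, phonemes):
        # graphemes: (batch, src_len); phonemes: (batch, tgt_len), BOS-first.
        enc_out, _ = self.encoder(self.grapheme_emb(graphemes))      # (B, S, 2H)
        keys = self.attn_proj(enc_out)                                # (B, S, H)
        state = enc_out.new_zeros(graphemes.size(0), keys.size(-1))  # (B, H)
        logits = []
        for t in range(phonemes.size(1)):
            # Dot-product attention: score each encoder position against the state.
            scores = torch.bmm(keys, state.unsqueeze(2)).squeeze(2)  # (B, S)
            weights = F.softmax(scores, dim=1)
            context = torch.bmm(weights.unsqueeze(1), enc_out).squeeze(1)  # (B, 2H)
            inp = torch.cat([self.phoneme_emb(phonemes[:, t]), context], dim=1)
            state = self.decoder(inp, state)
            logits.append(self.out(torch.cat([state, context], dim=1)))
        return torch.stack(logits, dim=1)  # (B, T, num_phonemes)
```

During training the decoder is fed the gold phoneme sequence shifted right (teacher forcing) and trained with cross-entropy against the unshifted targets; at test time the previous argmax (or beam) prediction is fed back instead.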

Dependencies

Evaluation Datasets

We used the following datasets provided by Stanley Chen (stanchen@us.ibm.com):

Note: for CMUDict, it might be a good idea to use the newer version available at https://raw.githubusercontent.com/cmusphinx/cmudict/master/cmudict.dict (a download sketch follows below).
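As a small illustration (assuming Python 3 and the URL from the note above), the newer CMUDict can be fetched and parsed roughly like this; the file name and parsing details are illustrative, not part of this repository's scripts:

```python
# Sketch: download the newer CMUDict release mentioned in the note above
# and split each line into a word and its phoneme sequence.
import urllib.request

CMUDICT_URL = ("https://raw.githubusercontent.com/cmusphinx/cmudict/"
               "master/cmudict.dict")
urllib.request.urlretrieve(CMUDICT_URL, "cmudict.dict")

# Each non-empty line is a word followed by its pronunciation; alternate
# pronunciations of a word carry a parenthesized index suffix.
with open("cmudict.dict", encoding="utf-8") as f:
    entries = [line.strip().split(maxsplit=1) for line in f if line.strip()]
print(len(entries), "entries loaded")
```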

Steps

Reference

Jointly learning to align and convert graphemes to phonemes with neural attention models by Shubham Toshniwal and Karen Livescu.

Here's the [BIBTEX] entry for easy citation.