mineshmathew opened 6 years ago
Siddharth, did you add documentation for this to the Eesen repository yet? Thanks, Florian
On Mar 8, 2018, at 12:58 AM, Minesh Mathew notifications@github.com wrote:
I understand that the WFST method requires both a lexicon and an LM (as n-gram frequencies), but the README mentions that a character-level RNN-LM can be used instead, without the need for a lexicon.
I see code to train a character-level RNN-LM, but how is it used during decoding?
Florian Metze (http://www.cs.cmu.edu/directory/florian-metze), Associate Research Professor, Carnegie Mellon University
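For context on the question above: the usual way a character-level RNN-LM replaces the lexicon + WFST is shallow fusion during CTC prefix beam search. The acoustic model emits per-frame character probabilities, and each candidate prefix extension is additionally scored by the character LM, so no word lexicon is ever consulted. Below is a minimal, self-contained sketch of that idea; this is toy code, not Eesen's actual implementation, and the function names and the LM weight `alpha` are assumptions:

```python
import math
from collections import defaultdict

NEG_INF = float("-inf")

def logsumexp(*xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    if m == NEG_INF:
        return NEG_INF
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_beam_search(probs, alphabet, lm_score, alpha=0.5, beam=8):
    """CTC prefix beam search with character-LM shallow fusion.

    probs    -- list of per-frame distributions over `alphabet`
                (index 0 is the CTC blank symbol).
    lm_score -- callable (prefix_tuple, char) -> log P(char | prefix)
                from a character-level LM (here a stand-in for an RNN-LM).
    alpha    -- LM interpolation weight (an assumed tunable).
    """
    # Each prefix keeps two log-probabilities: paths ending in blank
    # (pb) and paths ending in a non-blank character (pnb).
    beams = {(): (0.0, NEG_INF)}
    for frame in probs:
        new_beams = defaultdict(lambda: (NEG_INF, NEG_INF))
        for prefix, (pb, pnb) in beams.items():
            for i, p in enumerate(frame):
                lp = math.log(p) if p > 0 else NEG_INF
                if i == 0:
                    # Blank: prefix unchanged, path now ends in blank.
                    nb_pb, nb_pnb = new_beams[prefix]
                    new_beams[prefix] = (logsumexp(nb_pb, pb + lp, pnb + lp), nb_pnb)
                    continue
                c = alphabet[i]
                new_prefix = prefix + (c,)
                lm = alpha * lm_score(prefix, c)  # shallow fusion: add LM score
                nb_pb, nb_pnb = new_beams[new_prefix]
                if prefix and prefix[-1] == c:
                    # Repeated char only extends paths that ended in blank.
                    new_beams[new_prefix] = (nb_pb, logsumexp(nb_pnb, pb + lp + lm))
                    # Staying on the same char without a blank keeps the prefix.
                    ob_pb, ob_pnb = new_beams[prefix]
                    new_beams[prefix] = (ob_pb, logsumexp(ob_pnb, pnb + lp))
                else:
                    new_beams[new_prefix] = (
                        nb_pb, logsumexp(nb_pnb, pb + lp + lm, pnb + lp + lm))
        # Prune to the top `beam` prefixes by total probability.
        beams = dict(sorted(new_beams.items(),
                            key=lambda kv: -logsumexp(*kv[1]))[:beam])
    best = max(beams.items(), key=lambda kv: logsumexp(*kv[1]))
    return "".join(best[0])
```

A trivial `lm_score` of `lambda prefix, c: 0.0` turns the LM off, recovering plain CTC beam search; plugging in a trained character RNN-LM's conditional log-probabilities is what makes the lexicon unnecessary, since the LM operates directly on character histories rather than words.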