WWW
As a researcher wanting to understand the impact of phonetic error correction using language models on word-level recognition of dysarthric speakers, I would like to run an experiment using acoustic models trained on TORGO. Acoustic models are trained on a per-speaker basis under two different scenarios.
[ ] A previously trained phonetic language model, and its use in speech recognition, is here: https://github.com/SlangLab-NU/links (look at PSST with LM)
[ ] For phoneme-to-grapheme (P2G) conversion, we might want to look at CMU's seq2seq tool, which covers grapheme-to-phoneme and can be run as inverse G2P (https://github.com/cmusphinx/g2p-seq2seq)
AC
A set of experiments on word-level ASR output with phoneme-level correction
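To make the acceptance criteria concrete, a minimal sketch of the intended pipeline shape: take a word-level ASR hypothesis, move to the phoneme level, correct the phoneme sequence, and map back to a word. The lexicon, phoneme strings, and edit-distance scoring below are toy stand-ins for a real G2P/P2G model (e.g. g2p-seq2seq) and the trained phonetic language model; only the pipeline structure is the point.

```python
# Toy stand-in for a pronunciation lexicon produced by a G2P tool.
# Real experiments would use CMUdict-style entries and a trained model.
LEXICON = {
    "cat": ["K", "AE", "T"],
    "bat": ["B", "AE", "T"],
    "hello": ["HH", "AH", "L", "OW"],
}

def levenshtein(a, b):
    """Edit distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def correct(phonemes):
    """Stand-in for LM-based phoneme correction + P2G:
    pick the lexicon word whose pronunciation is closest."""
    return min(LEXICON, key=lambda w: levenshtein(phonemes, LEXICON[w]))

# A hypothesis whose final phoneme was misrecognized (T -> D)
# is pulled back to the nearest in-lexicon word.
print(correct(["K", "AE", "D"]))  # -> cat
```

In the actual experiment, the `correct` step would be replaced by rescoring candidate phoneme sequences with the phonetic language model, and the lexicon lookup by a trained P2G decoder.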