Closed kbramhendra closed 2 years ago
That paper was written by the team at NVIDIA; I'm afraid we haven't included those kinds of techniques in k2 so far. We are using quite different models and focusing on RNN-T, where the algorithms are a bit different.
Thank you for clarifying.
Hi, I am using k2 for decoding and trying to improve throughput. Can you suggest any ways to improve it apart from hyperparameter tuning, such as adjusting the lattice_beam or max_active sizes?
Another thing: the techniques from https://arxiv.org/abs/1910.10032 (GPU-Accelerated Viterbi Exact Lattice Decoder for Batched Online and Offline Speech Recognition) are implemented in Kaldi's cudadecoderbin. Are these improvements present in k2 as well?
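For context on the two knobs mentioned above, here is a minimal sketch in plain Python (not k2's actual API; the function and data layout are hypothetical) of how beam pruning and a max-active cap trade search accuracy for throughput in a Viterbi-style pass: states scoring more than `beam` below the frame's best, or outside the top `max_active`, are dropped before the next frame, shrinking the active set and hence the work per frame.

```python
import math

def viterbi_pruned(obs_logprobs, trans, beam=10.0, max_active=4):
    """Toy beam-pruned Viterbi pass (illustrative only, not k2 code).

    obs_logprobs: list of per-frame dicts {state: emission log-prob}.
    trans: dict {state: [(next_state, transition log-prob), ...]}.
    Returns (best_final_state, best_log_score) or None if all paths die.
    """
    active = {0: 0.0}  # state -> best log-score so far; start in state 0
    for frame in obs_logprobs:
        nxt = {}
        for s, score in active.items():
            for dst, w in trans.get(s, []):
                cand = score + w + frame.get(dst, float("-inf"))
                if cand > nxt.get(dst, float("-inf")):
                    nxt[dst] = cand
        if not nxt:
            return None
        best = max(nxt.values())
        # Beam pruning: drop states scoring far below the frame's best.
        pruned = {s: v for s, v in nxt.items() if v >= best - beam}
        # Max-active pruning: keep only the top-k surviving states.
        if len(pruned) > max_active:
            top = sorted(pruned.items(), key=lambda kv: -kv[1])[:max_active]
            pruned = dict(top)
        active = pruned
    return max(active.items(), key=lambda kv: kv[1])

# Tiny example: two competing self-loop states after the start state.
trans = {0: [(1, -0.1), (2, -2.0)], 1: [(1, -0.1)], 2: [(2, -0.1)]}
frames = [{1: -0.5, 2: -0.5}, {1: -0.2, 2: -1.0}]
state, score = viterbi_pruned(frames, trans, beam=10.0, max_active=4)
```

Tightening `beam` or lowering `max_active` speeds decoding but risks pruning away the true best path, which is exactly the accuracy/throughput trade-off that tuning lattice_beam and max_active involves.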