k2-fsa / k2

FSA/FST algorithms, differentiable, with PyTorch compatibility.
https://k2-fsa.github.io/k2
Apache License 2.0

Regarding throughput #1074

Closed kbramhendra closed 2 years ago

kbramhendra commented 2 years ago

Hey hi, I am using k2 for decoding and trying to improve throughput. Can you suggest any ways to improve it other than hyperparameter tuning, such as adjusting the lattice_beam or max_active sizes?
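To illustrate what beam and active-state tuning trades off, here is a minimal toy sketch of one pruning step, not k2's actual implementation: hypotheses outside `beam` of the best score are dropped, and the survivors are capped at `max_active` states. Tighter values mean fewer states to expand per frame (higher throughput) at some risk to accuracy. The function name and dict-based hypothesis set are illustrative assumptions, not part of the k2 API.

```python
import heapq

def pruned_step(scores, beam, max_active):
    """Toy pruning step (not k2's implementation).

    scores: dict mapping hypothesis/state -> log-score.
    beam: keep only hypotheses within `beam` of the best score
          (analogous to a lattice/search beam).
    max_active: hard cap on surviving states
                (analogous to max_active).
    """
    best = max(scores.values())
    # Beam pruning: discard anything too far below the best score.
    kept = {s: v for s, v in scores.items() if v >= best - beam}
    # Histogram/top-k pruning: cap the number of active states.
    if len(kept) > max_active:
        top = heapq.nlargest(max_active, kept.items(), key=lambda kv: kv[1])
        kept = dict(top)
    return kept

hyps = {"a": 0.0, "b": -1.5, "c": -3.0, "d": -7.0}
print(pruned_step(hyps, beam=4.0, max_active=2))  # → {'a': 0.0, 'b': -1.5}
```

Here the beam of 4.0 drops `d`, and `max_active=2` then drops `c`; shrinking either parameter shrinks the per-frame workload the same way in a real decoder.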

Another thing: the improvements from "GPU-Accelerated Viterbi Exact Lattice Decoder for Batched Online and Offline Speech Recognition" (https://arxiv.org/abs/1910.10032) are implemented in Kaldi's cudadecoderbin. Are those improvements in k2 as well?

danpovey commented 2 years ago

That paper was written by the guys at NVIDIA; I'm afraid we haven't included those kinds of things in k2 so far. We are using quite different models and focusing on RNN-T, where the algorithms are a bit different.

kbramhendra commented 2 years ago

Thank you for clarifying.