NVIDIA / NeMo

A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html
Apache License 2.0

"greedy_batched" methods should support "partial_hypotheses" option #9040

Closed: galv closed this issue 1 day ago

galv commented 2 months ago

Is your feature request related to a problem? Please describe.

I've been experimenting with examples/asr/asr_cache_aware_streaming/speech_to_text_cache_aware_streaming_infer.py. One thing I've noticed is that the "greedy_batched" strategy does not support partial hypotheses; we should add support for this. Right now, streaming inference for RNN-T models is horrendously slow: because only the "greedy" strategy supports partial hypotheses, streaming forces the decoder to run at batch size 1. The encoder contributes almost nothing to the runtime; the decoder is the main slowdown.

FYI @artbataev .
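
To make the ask concrete, here is a minimal, self-contained sketch of the pattern, with a toy stand-in for the prediction network and joint. Everything in it (the `Hypothesis` fields, `decode_batched`, the fake decoder math) is illustrative only, loosely modeled on NeMo's `rnnt_utils.Hypothesis` and the batched greedy decoder, not the actual API:

```python
import torch
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Hypothesis:
    """Per-stream decoding state carried across streaming chunks."""
    y_sequence: List[int] = field(default_factory=list)  # tokens emitted so far
    dec_state: Optional[torch.Tensor] = None             # decoder state to resume from

def decode_batched(
    encoder_output: torch.Tensor,                        # [B, T, D] chunk of encoder frames
    partial_hypotheses: Optional[List[Hypothesis]] = None,
) -> List[Hypothesis]:
    """One batched greedy pass over a chunk, resuming from partial_hypotheses."""
    B, T, D = encoder_output.shape
    if partial_hypotheses is None:
        partial_hypotheses = [Hypothesis() for _ in range(B)]
    # Stack per-stream decoder states into a single [B, D] batch.
    state = torch.stack([
        h.dec_state if h.dec_state is not None else torch.zeros(D)
        for h in partial_hypotheses
    ])
    for t in range(T):
        # Stand-in for one batched prediction-network/joint step: all B
        # streams advance together instead of in B separate batch-1 calls.
        state = torch.tanh(state + encoder_output[:, t])
        tokens = state.argmax(dim=-1)
        for b, hyp in enumerate(partial_hypotheses):
            hyp.y_sequence.append(int(tokens[b]))
    # Hand each stream its state back so the next chunk can resume from it.
    for b, hyp in enumerate(partial_hypotheses):
        hyp.dec_state = state[b]
    return partial_hypotheses

# Streaming loop: the same hypotheses are threaded through every chunk.
B, D = 4, 8
hyps: Optional[List[Hypothesis]] = None
for _ in range(3):                        # three streaming chunks
    chunk = torch.randn(B, 5, D)          # [B, T=5, D] encoder output for this chunk
    hyps = decode_batched(chunk, partial_hypotheses=hyps)
print([len(h.y_sequence) for h in hyps])  # each stream accumulated 3 * 5 tokens
```

The key point is that per-stream decoder state round-trips through the hypotheses, so a batched implementation could serve many concurrent streams with one decoder call per chunk rather than one call per stream.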

github-actions[bot] commented 1 month ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 1 week ago

This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] commented 1 day ago

This issue was closed because it has been inactive for 7 days since being marked as stale.