Hi,
I'm looking to run SW in real time, and wanted to ask whether you've published latency benchmarks for SW under streaming inference. Specifically, I'm interested in the model's optimal chunk size, how much future and historical context it needs (its receptive field), and its real-time factor (RTF).
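For context, here's a minimal sketch of how I plan to measure RTF (processing time divided by audio duration) by feeding audio in fixed-size chunks. The `model.process_chunk` call, sample rate, and chunk size are placeholders on my side, not assumptions about SW's actual API:

```python
import time

import numpy as np

SAMPLE_RATE = 16_000  # Hz; placeholder, I'd match the model's expected rate
CHUNK_SECONDS = 0.5   # candidate chunk size to benchmark

def measure_rtf(model, audio: np.ndarray) -> float:
    """Stream `audio` to the model in fixed-size chunks and return the
    real-time factor. `model.process_chunk` is a hypothetical stand-in
    for whatever streaming entry point SW exposes."""
    chunk_len = int(SAMPLE_RATE * CHUNK_SECONDS)
    start = time.perf_counter()
    for i in range(0, len(audio), chunk_len):
        model.process_chunk(audio[i:i + chunk_len])
    elapsed = time.perf_counter() - start
    return elapsed / (len(audio) / SAMPLE_RATE)
```

An RTF below 1.0 would mean the model keeps up with real time at that chunk size; I'd sweep chunk sizes to find the latency/throughput trade-off, which is why the optimal chunk size and receptive field matter to me.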
Thanks!