mike-2020 opened this issue 2 years ago
No
Hi, any other STTs you know of that do? I just switched to the large model and am having longer wait times, as in the issue linked below. It recognizes the text right away but considers it only partial, so there must be a parameter to adjust that.
Hi, any other STTs you know of that do?
You'd better describe your hardware and your goals. There are several ASR systems that can use many CPU cores, for example, and some that use the neural units on phones. The Mali GPU is actually very slow and not very helpful for deep learning; it is very task-specific.
Thanks for the reply. I actually meant a desktop PC GPU, like in the other thread. I have a 3060 in a dual Xeon 2687 workstation. My hunch is that the CPU is not the bottleneck and that not going from "partial" to "result" is the problem, but I haven't found a way to set the sensitivity for completing to "result". I should probably open a separate issue for that, here: https://github.com/alphacep/vosk-api/issues/1156
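For context on the partial-vs-result distinction discussed above: in the Vosk Python API, `KaldiRecognizer.AcceptWaveform()` returns `True` only when Kaldi's endpointing decides an utterance has ended, at which point `Result()` gives a final result; until then only `PartialResult()` is available. A minimal sketch (the file paths are placeholders, and the import is guarded since Vosk may not be installed):

```python
# Sketch of the partial-vs-final flow in the Vosk Python API.
# Assumes a 16-bit mono WAV file and a downloaded Vosk model directory;
# "model" and "audio.wav" below are placeholder paths, not real ones.
import json
import wave

try:
    from vosk import Model, KaldiRecognizer
    HAVE_VOSK = True
except ImportError:
    HAVE_VOSK = False


def transcribe(wav_path, model_path):
    """Feed audio in chunks; collect final results as endpoints are detected."""
    model = Model(model_path)
    wf = wave.open(wav_path, "rb")
    rec = KaldiRecognizer(model, wf.getframerate())
    finals = []
    while True:
        data = wf.readframes(4000)
        if not data:
            break
        if rec.AcceptWaveform(data):
            # True => Kaldi's endpointer fired; this chunk closes an utterance.
            finals.append(json.loads(rec.Result()).get("text", ""))
        else:
            # Otherwise only a partial hypothesis is available so far.
            _partial = json.loads(rec.PartialResult()).get("partial", "")
    # Flush whatever remains after the stream ends.
    finals.append(json.loads(rec.FinalResult()).get("text", ""))
    return finals
```

The "sensitivity" the comment above asks about is effectively Kaldi's endpointing configuration (silence thresholds that trigger the partial-to-final transition); it is not exposed as a simple parameter on `KaldiRecognizer` in the Python binding, which is presumably why a separate issue was warranted.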
Hello,
Does vosk support ARM GPU (e.g. Mali GPU)?
Thanks