Open ice10101 opened 1 week ago
Hello. Thanks for the question.
Most likely you need to enable the "Listening Mode: Always On" setting. By default, the "One Sentence" mode is used, which processes audio only up to the first non-speech fragment (i.e. up to the first silence). When "Always On" is enabled, audio is processed continuously, but still in chunks delimited by moments of non-speech. The "Always On" mode works more smoothly when the speech includes occasional pauses.
> Any advice? Is there any limit on input/output information?
As for the input audio, there is no specific fixed limit. Speech Note always tries to detect moments of silence in the audio and process the data in chunks. This should mitigate the problem of eating all of the available RAM in your system. I haven't tested it on very long audio files, but you should be able to transcribe 30 minutes of live speech without a problem.
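To illustrate the idea of chunking at silence, here is a minimal sketch (not Speech Note's actual code; the threshold and window values are arbitrary assumptions): split the signal wherever the amplitude stays below a threshold for long enough, so each chunk can be transcribed independently and memory use stays bounded.

```python
def split_on_silence(samples, threshold=0.01, min_silence=1600):
    """Split a sequence of float samples into chunks separated by runs
    of at least `min_silence` samples whose absolute value is below
    `threshold` (~0.1 s at 16 kHz). Illustrative sketch only."""
    chunks, current, silent_run = [], [], 0
    for s in samples:
        silent_run = silent_run + 1 if abs(s) < threshold else 0
        current.append(s)
        if silent_run >= min_silence:
            # Cut the chunk just before the silent run and start over.
            speech = current[:-silent_run]
            if speech:
                chunks.append(speech)
            current, silent_run = [], 0
    # Keep a trailing chunk only if it actually contains speech.
    if any(abs(s) >= threshold for s in current):
        chunks.append(current)
    return chunks
```

A real STT pipeline would feed each chunk to the model as it is produced instead of keeping all chunks in memory, which is what keeps RAM usage flat during long recordings.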
> Not using GPU acceleration
I recommend trying the Beta version. It has a few unresolved bugs, but it also has significantly improved CPU-only processing speed for WhisperCpp models. What's more, you can try the "OpenVINO" CPU acceleration, which speeds up STT with WhisperCpp even more. To enable "flathub-beta", follow these instructions.
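For reference, the usual way to add the Flathub beta channel looks like the following. Treat this as an assumption and prefer the linked instructions; the repo URL and application ID (`net.mkiol.SpeechNote`) are my guesses, not taken from this thread.

```shell
# Add the Flathub beta remote (assumed standard repo URL)
flatpak remote-add --if-not-exists flathub-beta \
    https://flathub.org/beta-repo/flathub-beta.flatpakrepo

# Install Speech Note from the beta channel (assumed app ID)
flatpak install flathub-beta net.mkiol.SpeechNote
```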
First of all, thank you for a great piece of software. Everything works fine except for one issue.
I can talk for several minutes, but after processing I only get the first few sentences, so I have to split my speech into short pieces, which is quite annoying.
Tried the Whisper and Faster Whisper models
Not using GPU acceleration
Any advice? Is there any limit on input/output information?