Open FredTheNoob opened 5 days ago
Look at my fork, maybe it will help you https://github.com/marcinmatys/whisper_streaming/blob/main/README2.md
https://github.com/ufal/whisper_streaming/issues/105
I have implemented something similar. On the server side I detect silence and then clear the buffer.
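A minimal sketch of what such server-side silence detection could look like, assuming int16 PCM samples arrive in chunks. The threshold and frame-count values here are illustrative assumptions, not values from the linked fork; tune them for your input levels:

```python
import math

# Assumed values -- tune for your microphone and gain, not taken from the thread.
SILENCE_RMS_THRESHOLD = 500.0   # RMS on the int16 scale below which a chunk counts as silent
SILENCE_CHUNKS_NEEDED = 25      # consecutive silent chunks before the buffer is cleared


def is_silent(samples, threshold=SILENCE_RMS_THRESHOLD):
    """Return True if the RMS energy of a chunk of int16 PCM samples is below threshold."""
    if not samples:
        return True
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms < threshold


class SilenceGate:
    """Counts consecutive silent chunks and signals when the audio buffer may be cleared."""

    def __init__(self, chunks_needed=SILENCE_CHUNKS_NEEDED):
        self.chunks_needed = chunks_needed
        self.silent_run = 0

    def update(self, samples):
        """Feed one chunk; return True once enough consecutive silence has accumulated."""
        if is_silent(samples):
            self.silent_run += 1
        else:
            self.silent_run = 0  # any speech resets the counter
        return self.silent_run >= self.chunks_needed
```

The server would call `gate.update(chunk)` on every received chunk and clear (or re-initialise) the ASR buffer when it returns True.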
Just tried implementing your client-side code; I get this error in the console:
You have to allow access to the microphone in your browser (Chrome, Firefox).
I have done that and the error still appears.
I tested a moment ago and it works for me. You have to debug the async function startRecording(). Maybe the error occurs on this line: await audioContext.audioWorklet.addModule('static/voice-processor.js');
I had a similar error some time ago, but it turned out I had a wrong URL in the addModule call.
I have the following frontend code which sends audio data over a websocket in the browser (using the microphone):
It uses the MediaRecorder API to send an audio chunk every 2 seconds. This is received on the backend like this:
main.py:
ASR.py:
The issue happens when I try to clear the audio buffer. My thought is to clear the buffer every time I detect punctuation, meaning a sentence has ended. However, clearing the buffer throws the following error:
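One way to sketch the punctuation-triggered reset described above. This is not the poster's ASR.py; `online` stands in for the whisper_streaming online processor, and `online.init()` is assumed to re-initialise its internal audio buffer (verify the method name against your version of the library). Re-initialising the processor's state in one step, rather than mutating the buffer while it is being iterated, is a common way to avoid errors like the one shown below:

```python
# Sentence-final punctuation that marks a safe point to reset the buffer.
SENTENCE_END = (".", "?", "!")


def ends_sentence(committed_text):
    """Return True when the committed transcript ends with sentence-final punctuation."""
    return committed_text.rstrip().endswith(SENTENCE_END)


def maybe_reset(online, committed_text):
    """Reset the streaming processor once a full sentence has been committed.

    `online` is assumed to be a whisper_streaming online processor whose
    init() method re-creates its audio buffer (hypothetical; check your version).
    Returns True if a reset happened.
    """
    if ends_sentence(committed_text):
        # Re-initialise rather than clearing the buffer in place mid-iteration.
        online.init()  # assumed API name -- adapt to your processor
        return True
    return False
```

The server loop would call `maybe_reset(online, committed_text)` after each transcription step, once the newly committed text has been sent to the client.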