If we use Linear PCM (LPCM), which has no container headers and lets any segment of the audio be decoded independently, we can stream the audio chunks to the backend. With silence detection, we could split the real-time audio at natural pauses and send each chunk to OpenAI's Whisper for transcription. That would let us display the transcription on the frontend in near real time, store it, and run summarization or other AI tasks on it.
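Here is a minimal sketch of that flow, not a definitive implementation. It assumes 16 kHz mono 16-bit LPCM arriving from the frontend, a simple energy-based silence detector (frame size and RMS threshold are arbitrary assumptions), and the OpenAI Python SDK calling the hosted whisper-1 model with an OPENAI_API_KEY set in the environment. Because the API expects a recognizable file format, each headerless chunk is wrapped in a WAV container before upload.

```python
# Sketch: split a raw LPCM stream at silent frames and transcribe each chunk.
# Assumptions: 16 kHz mono 16-bit audio, energy-based silence detection,
# OpenAI Python SDK with whisper-1; thresholds below are illustrative only.
import io
import wave
from array import array

from openai import OpenAI  # pip install openai

SAMPLE_RATE = 16_000          # Hz, mono
FRAME_MS = 30                 # analysis window for silence detection
SILENCE_RMS = 500             # int16 RMS below this counts as "silence"
FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2  # 2 bytes per int16 sample

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_silent(frame: bytes) -> bool:
    """Rough energy check on one frame of raw 16-bit LPCM."""
    samples = array("h", frame)
    if not samples:
        return True
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return rms < SILENCE_RMS


def pcm_to_wav(pcm: bytes) -> bytes:
    """Wrap headerless LPCM in a WAV container so the API can parse it."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(pcm)
    return buf.getvalue()


def transcribe_on_silence(pcm_stream: bytes, min_chunk_s: float = 2.0):
    """Accumulate frames, cut at silence once a minimum length is reached."""
    chunk = bytearray()
    min_bytes = int(min_chunk_s * SAMPLE_RATE * 2)
    for start in range(0, len(pcm_stream), FRAME_BYTES):
        frame = pcm_stream[start:start + FRAME_BYTES]
        chunk.extend(frame)
        if is_silent(frame) and len(chunk) >= min_bytes:
            result = client.audio.transcriptions.create(
                model="whisper-1",
                file=("chunk.wav", pcm_to_wav(bytes(chunk))),
            )
            yield result.text
            chunk.clear()
    if chunk:  # flush whatever is left at the end of the stream
        result = client.audio.transcriptions.create(
            model="whisper-1",
            file=("chunk.wav", pcm_to_wav(bytes(chunk))),
        )
        yield result.text
```

In a real setup the frontend would push frames over a WebSocket and the backend would run this incrementally rather than over a complete byte string, but the chunking logic stays the same.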
I'm interested in finding out how to send the fewest bytes, at sufficient quality, to a cloud-based Whisper deployment for transcription. The transcription could be saved as metadata for the audio recording, with the audio itself backed up to cloud storage. Generating .srt files with timestamps would also let users jump to the audio segment that corresponds to each subtitle.
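A sketch of the size-reduction and subtitle side, under stated assumptions: Whisper resamples everything to 16 kHz mono internally, so re-encoding to 16 kHz mono 16-bit before upload discards nothing the model would use; the snippet assumes ffmpeg is on PATH and that the hosted transcription endpoint accepts response_format="srt" for whisper-1, returning the subtitle text directly. File names and paths are placeholders.

```python
# Sketch: shrink the upload to 16 kHz mono 16-bit and request SRT output
# with timestamps. Assumes ffmpeg on PATH and the OpenAI Python SDK.
import subprocess
from pathlib import Path

from openai import OpenAI

client = OpenAI()


def shrink_audio(src: Path, dst: Path) -> Path:
    """Re-encode to 16 kHz mono 16-bit WAV, matching Whisper's internal format."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(src),
         "-ac", "1", "-ar", "16000", "-sample_fmt", "s16", str(dst)],
        check=True,
    )
    return dst


def transcribe_to_srt(audio_path: Path) -> str:
    """Request SRT directly so subtitle timestamps come back ready to save."""
    with audio_path.open("rb") as f:
        return client.audio.transcriptions.create(
            model="whisper-1",
            file=f,
            response_format="srt",
        )


if __name__ == "__main__":
    small = shrink_audio(Path("recording.wav"), Path("recording_16k.wav"))
    Path("recording.srt").write_text(transcribe_to_srt(small), encoding="utf-8")
```

If bandwidth matters more than lossless quality, a lossy codec such as Opus in an Ogg container (which the API also accepts) at roughly 24-32 kbps would cut the payload much further with little effect on transcription accuracy, though that trade-off would be worth testing on your own recordings.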