Two areas of the software currently malfunction because of audio handling. The backend audio services work with raw audio data, but the transcription library expects WAV data. The frontend captures audio through the browser but has no reliable way to detect the audio configuration (transmitting the configuration to the backend is already implemented).
The exact problems are the following:
backend/app/services/transcription/recorder: captures audio data (delivered via an event) and must produce a stream of WAV-formatted bytes (unless a workaround exists). The raw data is already relayed correctly; the only missing piece is producing a WAV-formatted stream.
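As a minimal sketch of one way to close that gap, the raw bytes can be wrapped in a WAV container with Python's standard-library wave module. The function name, sample rate, channel count, and sample width below are assumptions (the recorder's actual format must come from the frontend config), not the existing recorder API:

```python
import io
import wave

def pcm_to_wav(pcm_bytes: bytes, sample_rate: int = 16000,
               channels: int = 1, sample_width: int = 2) -> bytes:
    """Wrap raw PCM bytes in a WAV container.

    Assumes little-endian integer PCM; sample_width=2 means 16-bit samples.
    """
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(channels)
        wav.setsampwidth(sample_width)
        wav.setframerate(sample_rate)
        wav.writeframes(pcm_bytes)  # writes header + data, patches sizes
    return buf.getvalue()
```

For a continuous stream (rather than a finished buffer), the same header layout can be emitted once up front with a placeholder data length, since many transcription libraries ignore the declared length and read until EOF.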
software/frontend/wwwroot/audioRecorder.js: starts the microphone through the web browser and sends the audio bytes to the backend. The main problem is resolving the configuration info (e.g. sample rate and channel count); transmitting the configuration already works.
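One hedged sketch of resolving that configuration: prefer what the capturing MediaStreamTrack reports via getSettings(), and fall back to the AudioContext's actual processing rate, which browsers always expose. The helper name below is hypothetical, not part of the existing audioRecorder.js:

```javascript
// Pure helper: prefer the track's reported settings; fall back to the
// AudioContext sample rate and mono when the track omits them.
function resolveAudioConfig(trackSettings, contextSampleRate) {
  return {
    sampleRate: trackSettings.sampleRate ?? contextSampleRate,
    channelCount: trackSettings.channelCount ?? 1,
  };
}

// Browser usage (only runs in a browser context):
// const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
// const settings = stream.getAudioTracks()[0].getSettings();
// const ctx = new AudioContext();
// const config = resolveAudioConfig(settings, ctx.sampleRate);
// ...send `config` to the backend (transmission is already implemented)...
```

Splitting the pure fallback logic from the browser calls keeps the decision testable, since getUserMedia and AudioContext are unavailable outside the browser.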