Although I had already written code to read audio from multiple microphones and save them to wav files, I rewrote the code for the following reasons:
- To make the code configurable with Hydra (a minimal configuration sketch follows this list).
- To join the audio from the microphone array into a single WAV file so it could be transcribed with OpenAI's speech-to-text model (Whisper); see the merging sketch below. I saved this file as audio_files/single_channel/output.wav.
- To save the separate WAV files for each microphone into their own folder, audio_files/multi_channel.
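For the Hydra part, here is a minimal sketch of what the entry point can look like. The config group and field names (recording.sample_rate, paths.multi_channel_dir, and so on) are my own illustrative choices, not the project's actual schema.

```python
# Minimal Hydra-driven entry point (illustrative field names, not the
# project's real schema). Values live in conf/config.yaml and can be
# overridden on the command line, e.g.:
#   python record.py recording.sample_rate=48000
import hydra
from omegaconf import DictConfig


@hydra.main(version_base=None, config_path="conf", config_name="config")
def record(cfg: DictConfig) -> None:
    print(f"Recording {cfg.recording.channels} microphones "
          f"at {cfg.recording.sample_rate} Hz")
    print(f"Per-microphone WAVs -> {cfg.paths.multi_channel_dir}")
    print(f"Merged WAV          -> {cfg.paths.single_channel_file}")


if __name__ == "__main__":
    record()
```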
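For the merging step, here is a rough sketch of how the per-microphone recordings can be combined into the single file that gets sent to Whisper. It uses the soundfile library and assumes the recordings are mono, share a sample rate, and are roughly the same length; everything beyond the two folder paths above is illustrative rather than the project's exact code.

```python
# Sketch: average the per-microphone WAVs in audio_files/multi_channel
# into one mono file at audio_files/single_channel/output.wav.
# Assumes mono recordings with a common sample rate.
from pathlib import Path

import numpy as np
import soundfile as sf

multi_dir = Path("audio_files/multi_channel")
single_path = Path("audio_files/single_channel/output.wav")
single_path.parent.mkdir(parents=True, exist_ok=True)

# Load each microphone's recording as a float array.
tracks = []
sample_rate = None
for wav_path in sorted(multi_dir.glob("*.wav")):
    data, sr = sf.read(str(wav_path), dtype="float32")
    if sample_rate is None:
        sample_rate = sr
    tracks.append(data)

# Trim to the shortest track so the arrays stack cleanly, then average
# across microphones to get a single mono signal.
min_len = min(len(t) for t in tracks)
mono = np.mean(np.stack([t[:min_len] for t in tracks]), axis=0)

sf.write(str(single_path), mono, sample_rate)
```

The resulting audio_files/single_channel/output.wav is the file that then goes to Whisper for transcription.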