Hello, this is my first time working with audio, so I'm probably missing something.
I have a model that predicts guitar chords, and I'm implementing a simple Streamlit dashboard to record audio and send the recordings for prediction. This is the code I'm using, based on this repo:
```python
wav_audio_data = st_audiorec()
if wav_audio_data is not None:
    audio = st.audio(wav_audio_data, format='audio/wav')
```
Is it possible to retrieve the audio directly as a NumPy array? I realize that `audio` in the code above is a DeltaGenerator object, but I don't really know how to use it. I tried calling `np.frombuffer` on `wav_audio_data`, but I'm not sure whether that's appropriate.
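For what it's worth, here is a minimal sketch of what I have in mind. It assumes `wav_audio_data` is the raw bytes of a 16-bit PCM WAV file (which I believe is what st_audiorec returns, though I'm not certain), so it parses the header with the standard-library `wave` module instead of calling `np.frombuffer` on the whole buffer (which would also include the 44-byte WAV header). The `wav_bytes_to_array` helper name is mine; the demo WAV at the bottom just stands in for the recorded data:

```python
import io
import wave
import numpy as np

def wav_bytes_to_array(wav_bytes: bytes):
    """Decode 16-bit PCM WAV bytes into a float32 array in [-1, 1] plus the sample rate."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        sample_rate = wf.getframerate()
        n_channels = wf.getnchannels()
        frames = wf.readframes(wf.getnframes())
    # Assumes 16-bit samples; adjust the dtype if getsampwidth() reports otherwise.
    audio = np.frombuffer(frames, dtype=np.int16).astype(np.float32) / 32768.0
    if n_channels > 1:
        audio = audio.reshape(-1, n_channels).mean(axis=1)  # downmix to mono
    return audio, sample_rate

# Build a small in-memory WAV (0.1 s of a 440 Hz sine at 44.1 kHz) to stand in
# for wav_audio_data, since this sketch has no recorder attached.
sr = 44100
t = np.linspace(0, 0.1, int(sr * 0.1), endpoint=False)
pcm = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)  # 2 bytes = 16-bit PCM
    wf.setframerate(sr)
    wf.writeframes(pcm.tobytes())

audio, rate = wav_bytes_to_array(buf.getvalue())
print(rate, audio.shape, audio.dtype)
```

In the dashboard I would then call `wav_bytes_to_array(wav_audio_data)` right after the `None` check, keeping `st.audio` purely for playback.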
Is it possible to increase the quality of the recorded audio? When I record something directly on my computer the sound is clear, but recordings made through the dashboard are low quality.
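To narrow down the quality issue, I was thinking of inspecting the WAV header of what the browser actually recorded (sample rate, bit depth, channel count) and comparing it with a recording made locally. This is only a diagnostic sketch using the standard-library `wave` module; `describe_wav` is a hypothetical helper name, and the in-memory WAV below just stands in for the recorded bytes:

```python
import io
import wave

def describe_wav(wav_bytes: bytes) -> dict:
    """Report the format parameters of a WAV byte string."""
    with wave.open(io.BytesIO(wav_bytes), "rb") as wf:
        return {
            "sample_rate": wf.getframerate(),
            "channels": wf.getnchannels(),
            "bit_depth": wf.getsampwidth() * 8,
            "duration_s": wf.getnframes() / wf.getframerate(),
        }

# Stand-in for wav_audio_data: one second of 16-bit mono silence at 48 kHz.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(48000)
    wf.writeframes(b"\x00\x00" * 48000)

info = describe_wav(buf.getvalue())
print(info)
```

If the browser recording turns out to have a lower sample rate or bit depth than the local one, that would at least explain where the quality is being lost.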
Thank you in advance