Closed: StuartMellor closed this issue 3 months ago
Have you tried use case 34 in the README.md? Below is the relevant excerpt from the readme.
You can send an audio stream with stereo channels either as content or through the main audio input.
Use the following setting to optimize the main audio input and output for an audio stream with stereo channels:
meetingSession.audioVideo.setAudioProfile(AudioProfile.fullbandMusicStereo());
Use the following setting to optimize the content share audio for an audio stream with stereo channels:
meetingSession.audioVideo.setContentAudioProfile(AudioProfile.fullbandMusicStereo());
The only current limitation for stereo mic feeds is that you cannot use Voice Focus. If Voice Focus is enabled, the audio is downmixed to mono before noise suppression is applied.
Also note that both the sending and receiving sides should use the audio profile setting that Akshay shared above.
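A minimal sketch of applying both settings in one place (the helper name `applyStereoProfiles` and the dependency injection are my own, not part of the SDK; `audioVideo` is the `meetingSession.audioVideo` facade and `AudioProfile` comes from amazon-chime-sdk-js):

```javascript
// Hypothetical helper: apply the fullband stereo profile to both the main
// audio and the content share audio on a meeting session. The SDK objects
// are passed in as parameters so the wiring itself is easy to test.
function applyStereoProfiles(audioVideo, AudioProfile) {
  const profile = AudioProfile.fullbandMusicStereo();
  audioVideo.setAudioProfile(profile);        // main audio input/output
  audioVideo.setContentAudioProfile(profile); // content share audio
}
```

Per the note above, this would need to run on both the sending and receiving attendees before joining the meeting for stereo to be negotiated end to end.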
If you deploy the serverless demo included in the repo, you can test this out. On the join screen, click "Additional Options", select "Set fullband music (stereo) quality", click Save, and then join the meeting. In the meeting, if you click the microphone dropdown you should see an option called "Prerecorded Speech (Stereo)". If another attendee joins using the same configuration, you should be able to hear test audio with panning.
What are you trying to do?
I'm trying to figure out how to enable stereo audio feeds when using a combination of this Chime SDK and the React SDK. I've set up a StereoPanner node and have confirmed that it accepts either a mono or stereo media stream (from a device or elsewhere) and outputs two channels. Ideally, I'd love to process microphone inputs client side so that attendees are spread across a stereo audio plane (i.e., left to right).
I understand the dangers of this:
I've scoured the available documentation for information on enabling stereo and have found that it is undocumented.
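For reference, the panning setup described above could be sketched roughly as follows. The pan spacing helper is pure; the Web Audio wiring assumes a browser environment, and the idea of passing the resulting stream into the meeting session is my assumption, not documented SDK behavior:

```javascript
// Evenly spread `count` attendees from hard left (-1) to hard right (+1).
function panForAttendee(index, count) {
  if (count <= 1) return 0;
  return -1 + (2 * index) / (count - 1);
}

// Hypothetical browser-only wiring: route the microphone through a
// StereoPannerNode and capture the panned output as a stereo MediaStream.
// (Assumption: the returned stream is then handed to the Chime meeting
// session as its audio input device.)
async function pannedMicStream(pan) {
  const ctx = new AudioContext();
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const source = ctx.createMediaStreamSource(mic);
  const panner = new StereoPannerNode(ctx, { pan });
  const dest = ctx.createMediaStreamDestination(); // stereo output stream
  source.connect(panner).connect(dest);
  return dest.stream;
}
```

Note that per the answer above, Voice Focus would have to stay disabled for this to survive end to end, since it downmixes to mono before noise suppression.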
How can the documentation be improved to help your use case?
It would be great to have further documentation and examples on how to enable this feature! I understand that this likely involves a lot of fiddly configuration with audio contexts, but there is no indication of how sources are passed through Chime or where audio might be mixed down to mono.
What documentation have you looked at so far?