Having the WebRTC library handle audio internally is unfamiliar to me. [1] We’re using SIP.JS, which expects implementors to create their own MediaStream from the PeerConnection tracks and render the audio themselves. This gives us fine-grained control over mixing and adjusting levels.
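For context, this is roughly the browser-side pattern SIP.JS expects. The sketch below is illustrative rather than our exact code: it assumes the default SIP.JS web SessionDescriptionHandler (which exposes the underlying RTCPeerConnection), and the `attachRemoteAudio` helper name is made up.

```ts
// Sketch: build our own MediaStream from the remote audio receivers so we can
// mix and adjust levels (e.g. via Web Audio) before playback.
function attachRemoteAudio(pc: RTCPeerConnection, audioElement: HTMLAudioElement): void {
  const remoteStream = new MediaStream();
  for (const receiver of pc.getReceivers()) {
    if (receiver.track && receiver.track.kind === "audio") {
      remoteStream.addTrack(receiver.track);
    }
  }
  audioElement.srcObject = remoteStream;
  void audioElement.play(); // may reject until a user gesture allows autoplay
}

// Usage (assumption: the session's sessionDescriptionHandler is the default
// SIP.JS web SessionDescriptionHandler, which exposes `peerConnection`):
// const pc = (session.sessionDescriptionHandler as any)?.peerConnection;
// attachRemoteAudio(pc, document.getElementById("remoteAudio") as HTMLAudioElement);
```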
Is it possible to get direct access to the media and render it ourselves, with something like npm-speaker?
I asked this question on the community forum first and am hoping to get more traction here.
[1] https://github.com/react-native-webrtc/react-native-webrtc/blob/master/Documentation/BasicUsage.md#controlling-remote-audio-tracks