kpreid opened this issue 9 years ago
This looks interesting - gstreamer > janus > WebRTC streaming audio/video to browsers.
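For concreteness, here is a minimal sketch of that pipeline idea using GStreamer's Python bindings: encode the audio to Opus and push it as RTP to a UDP port that a Janus "streaming" mountpoint would be configured to listen on. The test source and the port number are placeholders, not anything from this project.

```python
# Sketch only: push Opus-in-RTP towards a Janus streaming mountpoint.
# audiotestsrc and port 5002 are placeholders for the real audio source.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)
pipeline = Gst.parse_launch(
    'audiotestsrc is-live=true ! audioconvert ! audioresample '
    '! opusenc ! rtpopuspay pt=111 '
    '! udpsink host=127.0.0.1 port=5002'
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()
```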
I've got quite a lot of experience with WebRTC between the browser and some custom Rust server-side code, as well as (separately from the WebRTC work) experience with Python and Twisted. I've got an RTL-SDR stick arriving tomorrow and would be interested in working on this once I've got everything up and running.
@jbg Great!
Since this would generally involve overhauling the audio bus code, I'll also note that for multi-session support we want there to be multiple audio buses rather than a single one (which forces everyone to listen to the same audio). Two ways this could be approached:
Client-managed audio bus:
Note that this design has audio buses independent of session objects. The advantage is that the client can control things in detail, which makes it easier to build a really good UI, but everything is overall more complicated.
Server-managed audio bus:
There is one audio bus per Session object. (Session objects are currently stateless proxies, but that's just because that was the minimal way to introduce them; the plan is to make them be collections of receivers and devices, potentially a subset of the ones Top knows.)
The disadvantage of this design is that no two clients sharing a session can have different audio (beyond client-side mute/volume), but it means the client needs to know much less about how audio works (pretty much as in the current situation).
After having written all that up, I think I prefer the server-managed version: it is much simpler, and not actually an obstacle to any future development (the hard part will be having multiple buses at all).
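To make the server-managed option concrete, here is a minimal sketch. All of the names here (`AudioBus`, `get_audio`, the callback-based fan-out) are hypothetical, not existing APIs; the point is just the shape: one bus per Session, with every client of that session subscribed to the same mix.

```python
# Hypothetical sketch of the server-managed design; none of these names
# exist in the codebase. One AudioBus per Session, shared by its clients.
import numpy as np

class AudioBus:
    def __init__(self):
        self._sources = []      # receivers feeding this bus
        self._subscribers = []  # one delivery callback per connected client

    def add_source(self, receiver):
        self._sources.append(receiver)

    def subscribe(self, deliver):
        self._subscribers.append(deliver)

    def process_block(self, nframes):
        # Mix every receiver's output and fan the same mix out to every
        # client -- which is exactly why clients sharing a session cannot
        # hear different audio under this design.
        mixed = np.zeros(nframes, dtype=np.float32)
        for src in self._sources:
            mixed += src.get_audio(nframes)  # hypothetical receiver method
        for deliver in self._subscribers:
            deliver(mixed)

class Session:
    """A session owns exactly one audio bus."""
    def __init__(self, receivers):
        self.receivers = list(receivers)
        self.audio_bus = AudioBus()
        for r in self.receivers:
            self.audio_bus.add_source(r)
```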
Right now, receive audio is delivered to the client as a custom protocol over WebSockets and then copied via JavaScript into Web Audio buffers. This is problematic in a number of ways: the JavaScript copying adds latency and CPU overhead, WebSockets run over TCP only, and the samples cross the wire uncompressed.
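For reference, the general shape of the current server-side path looks roughly like the following Twisted/Autobahn sketch. This is not the project's actual code; `AUDIO_BUS`, `read_block`, and `BLOCK_SECONDS` are invented for illustration. Every block goes out as raw float32 over TCP, and the client-side JavaScript then has to copy it again into Web Audio buffers.

```python
# Illustrative only -- not the project's actual protocol code.
import numpy as np
from autobahn.twisted.websocket import (WebSocketServerFactory,
                                        WebSocketServerProtocol)
from twisted.internet import reactor

BLOCK_SECONDS = 0.05  # invented block size for the sketch

class _FakeBus:  # stand-in source so the sketch is self-contained
    def read_block(self):
        return np.zeros(2400, dtype=np.float32)  # 50 ms of silence @ 48 kHz

AUDIO_BUS = _FakeBus()

class AudioStreamProtocol(WebSocketServerProtocol):
    def onOpen(self):
        self._send_block()

    def _send_block(self):
        block = AUDIO_BUS.read_block()
        # Raw 4-bytes-per-sample frames over TCP: no compression, and any
        # packet loss stalls the whole stream behind retransmission.
        self.sendMessage(block.astype('<f4').tobytes(), isBinary=True)
        reactor.callLater(BLOCK_SECONDS, self._send_block)

if __name__ == '__main__':
    factory = WebSocketServerFactory()
    factory.protocol = AudioStreamProtocol
    reactor.listenTCP(8080, factory)
    reactor.run()
```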
The way I see to improve this situation is to use WebRTC instead. I assume this would allow the client-side JS to get out of the way of the audio samples, improve latency, use UDP where possible, compress the stream, etc.
However, most of the documentation I have found for WebRTC describes how to get two browsers to talk to each other, possibly with an off-the-shelf intermediary server. We need to find documentation of the actual VoIP protocols used by WebRTC (SDP offer/answer signalling, ICE, and DTLS-SRTP carrying RTP) and figure out whether it would be practical to implement them in Twisted (or use an external process to handle it).
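Since writing a full WebRTC stack in Twisted would be a large job, one concrete possibility is a Python library such as aiortc, which implements ICE, DTLS-SRTP, and Opus encoding. It is asyncio-based rather than Twisted-native (Twisted's asyncio reactor could bridge that). In the sketch below, `bus.read()` is a hypothetical coroutine yielding float32 sample blocks from an audio bus; everything else is aiortc's real API.

```python
# Sketch using aiortc (a real library, though newer than this discussion);
# `bus.read()` is a hypothetical coroutine yielding float32 numpy blocks.
import fractions
import numpy as np
from av import AudioFrame
from aiortc import MediaStreamTrack, RTCPeerConnection, RTCSessionDescription

SAMPLE_RATE = 48000
SAMPLES_PER_FRAME = 960  # 20 ms at 48 kHz, Opus's native frame size

class BusAudioTrack(MediaStreamTrack):
    kind = 'audio'

    def __init__(self, bus):
        super().__init__()
        self._bus = bus
        self._pts = 0

    async def recv(self):
        samples = await self._bus.read(SAMPLES_PER_FRAME)  # hypothetical
        pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
        frame = AudioFrame.from_ndarray(pcm.reshape(1, -1),
                                        format='s16', layout='mono')
        frame.sample_rate = SAMPLE_RATE
        frame.pts = self._pts
        frame.time_base = fractions.Fraction(1, SAMPLE_RATE)
        self._pts += SAMPLES_PER_FRAME
        return frame

async def handle_offer(offer_sdp, bus):
    # Standard WebRTC offer/answer: the browser sends an SDP offer over
    # whatever signalling channel we already have (e.g. the WebSocket).
    pc = RTCPeerConnection()
    pc.addTrack(BusAudioTrack(bus))
    await pc.setRemoteDescription(
        RTCSessionDescription(sdp=offer_sdp, type='offer'))
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return pc.localDescription.sdp  # send back to the browser
```

After the answer is returned, the media itself flows over UDP (with Opus compression) entirely outside the JavaScript sample path, which is the improvement described above.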