Open pthatcher opened 1 year ago
I wrote the blog post in 2018, based on the presentation and discussions at the time. Happy to dig up more notes if needed. Perhaps @pthatcher already knows the proposal.
I did have similar questions about the difference between an ice-controller plus application versus having full control over the ice-transport.
If you could find my old proposal or slides, that would be amazing. I searched the w3c mailing list and all I could find was: https://lists.w3.org/Archives/Public/public-webrtc/2018Jun/0083.html.
Slides and recording can be found here; we talked about this on the first day: https://www.w3.org/2011/04/webrtc/wiki/June_19-20_2018. Bernard's intro of your session starts here.
Thank you for sharing the references and for the comments.
The ICE controller proposal does overlap somewhat with FlexICE, but tackles a smaller problem, and perhaps takes a more incremental approach to it.
Having said that, a separate interface is not incompatible with FlexICE and could be taken as an argument to the RTCIceTransport constructor. This could offload some of the RTCIceTransport extensions in FlexICE to the ICE controller instead.
`candidatepairupdated` is fired for response received, receive timeout, etc. Perhaps this needs to be stated more clearly.
Ah yes, I missed a check for the ICE role in the validation step. It might be interesting to send a nomination regardless. A successful nomination will result in a subsequent `candidatepairswitched` event, so there is feedback if the nomination fails.
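If `candidatepairswitched` is the feedback signal, the flow could look like the sketch below. This is not runnable against a real `RTCIceTransport` today; `nominate()` and the event name are assumptions taken from the proposal, mocked here with a plain `EventTarget` so the control flow is concrete.

```js
// Sketch: nominate a pair, then treat a candidatepairswitched event as
// success and a timeout as failure. "nominate" and "candidatepairswitched"
// are proposed API surface, not shipped WebRTC.
function nominateWithFeedback(transport, pair, timeoutMs = 1000) {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(false), timeoutMs); // no switch observed
    transport.addEventListener("candidatepairswitched", () => {
      clearTimeout(timer);
      resolve(true); // the nomination took effect
    }, { once: true });
    transport.nominate(pair);
  });
}

// Mock transport that "accepts" the nomination asynchronously.
const transport = new EventTarget();
transport.nominate = () =>
  setTimeout(() => transport.dispatchEvent(new Event("candidatepairswitched")), 0);

nominateWithFeedback(transport, {}).then((ok) => console.log(ok ? "switched" : "failed"));
```

A real implementation would also want to distinguish a switch caused by this nomination from one triggered by the agent on its own.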
Ack, makes sense to add this.
There has been another request (#5) to allow continual gathering, which native webrtc already supports. Do you have any thoughts about continual gathering vs. regather? What happens to existing server-reflexive and relay candidates when regather is called? Or is the main intent of regather to discover new network interfaces (which could be achieved with continual gathering)?
Gathering has been somewhat out of scope in the initial proposal, but it would be useful to extend here.
- The main reason for a separate `RTCIceController` interface and extending `RTCConfiguration` is timing, i.e. to allow the app to insert itself into ICE before the user agent starts making decisions. `setLocalDescription` leads to both creation of `RTCIceTransport` and beginning of ICE. Attaching to `RTCIceTransport` after this point might be too late for the app to be in full control over ICE.
It shouldn't be too late. `transceiver.transport` and `transceiver.transport.iceTransport` are created on the queued task that releases early candidates (queued in tasks after this one) and resolves the sLD promise: "let transport be a newly created RTCDtlsTransport object with a new underlying RTCIceTransport."
This, together with the JS event model, guarantees that no ICE events are surfaced to JS until the following code has run to completion:
```js
const pc = new RTCPeerConnection({iceServers});
const transceiver = pc.addTransceiver("video");
await pc.setLocalDescription();
pc.onicecandidate = () => { /* guaranteed to catch all candidates */ };
console.log(transceiver.sender.transport.iceTransport.state); // guaranteed to be "new"
transceiver.sender.transport.iceTransport.onstatechange = () => { /* guaranteed to catch all state transitions */ };
```
I'd like to understand, if this is good enough for `RTCIceTransport`, why it's not good enough for this new API.
Specifically, which user agent "decisions" necessitate a new API to intercept? It's true an ICE agent may have already started STUN checks against the (optional¹) ICE servers list, but any JS API to intercept this would necessarily get the same JS event loop guarantees.²
Unless there's a compelling reason, I think this belongs on RTCIceTransport.
1. E.g. one way to prevent the ICE agent from beginning on the iceServers list would be to not provide one.
2. The ICE agent may of course start way sooner with prefetch and pooling, so I'm not opposed to direct upfront configuration settings where they make sense. But it's not necessary to intercept events this early.
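The run-to-completion argument above doesn't depend on WebRTC at all. A plain `EventTarget` sketch shows why a listener attached synchronously, after an operation that queues events as tasks, is still in time (the `icecandidate` name here is just a label, not a real ICE event):

```js
// Minimal sketch (plain JS, no WebRTC): an event dispatched from a queued
// task can never overtake the synchronous code that attaches its listener,
// so the listener below is never "too late".
const target = new EventTarget();
const caught = [];

// Stand-in for the ICE agent queueing an event as a task.
setTimeout(() => target.dispatchEvent(new Event("icecandidate")), 0);

// Attached after the "agent" started, but within the same run-to-completion.
target.addEventListener("icecandidate", () => caught.push("handled"));

setTimeout(() => console.log(caught), 10); // [ 'handled' ]
```

The same guarantee is what makes `pc.onicecandidate = ...` safe immediately after `await pc.setLocalDescription()` in the snippet above.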
If the application is able to attach listeners to `RTCIceTransport` before the ICE agent has had a chance to prune away any candidate pairs, the API can go on `RTCIceTransport`. Indeed, it makes things much easier, not least by removing the limitation to max-bundle.
On the offerer, this is possible by intercepting before `setRemoteDescription`.
On the answerer, `setRemoteDescription` followed by `setLocalDescription` creates `RTCIceTransport` object(s) and queues the task that releases early candidates. If I understand correctly, the JS event model guarantees that listeners can be attached safely at this point:
```js
const pc = new RTCPeerConnection({iceServers});
await pc.setRemoteDescription(offer);
pc.onicecandidate = () => { /* guaranteed to catch all candidates */ };
await pc.setLocalDescription();
pc.getTransceivers().forEach((transceiver) => {
  console.log(transceiver.receiver.transport.iceTransport.state); // guaranteed to be "new"
  transceiver.receiver.transport.iceTransport.onicepruneproposed = (event) => {
    event.preventDefault(); /* guaranteed to prevent all prune proposals */
  };
});
```
While the ICE agent is free to start ICE checks after `setLocalDescription`, the event model guarantees that the application must get a chance to respond to any prune proposals before pruning can occur.
If all of this is correct, then yes, it is preferable to add this as an `RTCIceTransport` extension.
I have my qualms about actually acquiring the `RTCIceTransport` object (e.g. `transceiver.sender.transport.iceTransport`), but it is workable.
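For what it's worth, the acquisition could be wrapped in a small helper that walks the transceivers and de-duplicates shared transports. The helper below is a sketch; the mock peer connection only mirrors the `sender/receiver.transport.iceTransport` shape discussed above, since a real `RTCPeerConnection` needs a browser.

```js
// Hypothetical helper: collect the distinct RTCIceTransport objects behind
// a peer connection's transceivers. Sender and receiver share a transport,
// and with max-bundle all transceivers typically share one.
function iceTransportsOf(pc) {
  const transports = new Set();
  for (const t of pc.getTransceivers()) {
    for (const side of [t.sender, t.receiver]) {
      if (side.transport) transports.add(side.transport.iceTransport);
    }
  }
  return [...transports];
}

// Shape-only mock standing in for a real RTCPeerConnection.
const sharedIce = { state: "new" };
const mockPc = {
  getTransceivers: () => [
    { sender:   { transport: { iceTransport: sharedIce } },
      receiver: { transport: { iceTransport: sharedIce } } },
  ],
};
console.log(iceTransportsOf(mockPc).length); // 1 — bundled sides share one transport
```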
I proposed something a long time ago which I called FlexICE. I couldn't find my slides from years ago, but I found some article on the web which had some details (https://www.callstats.io/blog/2018/06/22/the-evolution-of-the-webrtc-ice-api). Comparing those notes with what you have I see a lot of overlap, which is good. I think low-level ICE control is a great idea.
I have some thoughts from comparing the two: