This PR is likely to affect the Flashlight integration work.
Previously, a controlling process, upon instantiating a Broflake client, would provide a list of STUN servers in the `clientcore.WebRTCOptions` struct. This is no longer the case.
Now the `clientcore.WebRTCOptions` struct takes a function which returns a "batch" of STUN servers, parameterized by `STUNBatchSize`.
We don't care how the function arrives at that batch of STUN servers. It may fetch them dynamically from a URL, evaluate a newly fetched list against some cached list, hardcode them in the function body, extract them from the global config, blend them from multiple lists collected in different ways, or employ some other strategy entirely.
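For concreteness, here's a minimal sketch of the shape this takes. Only `WebRTCOptions`, `STUNBatch`, and `STUNBatchSize` are names from this PR; the exact signature and any other fields shown are assumptions:

```go
package clientcore

// WebRTCOptions sketch: the STUNBatch signature shown here is an assumption,
// not necessarily what ships in clientcore.
type WebRTCOptions struct {
	// STUNBatch returns a batch of up to `size` STUN server URIs
	// (e.g. "stun:stun.l.google.com:19302")
	STUNBatch     func(size uint32) ([]string, error)
	STUNBatchSize uint32
	// ...other options elided
}

// A trivial STUNBatch implementation that just hardcodes its servers
func hardcodedBatch(size uint32) ([]string, error) {
	all := []string{
		"stun:stun.l.google.com:19302",
		"stun:stun1.l.google.com:19302",
	}
	if int(size) < len(all) {
		all = all[:size]
	}
	return all, nil
}
```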
In `client.go`, I wrote a naive `STUNBatch` function which may be suitable for the MVP -- we fetch a publicly maintained list of ~500 fresh STUN servers and assemble a `STUNBatchSize`-sized subset at random.
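Roughly, that naive strategy looks like the sketch below; the list URL and function name are placeholders, not the actual values in `client.go`:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"math/rand"
	"net/http"
	"strings"
)

// randomSTUNBatch fetches a public list of STUN hosts and returns a random
// subset of up to `size` of them, formatted as "stun:" URIs.
func randomSTUNBatch(size uint32) ([]string, error) {
	const listURL = "https://example.com/fresh-stun-hosts.txt" // placeholder URL

	resp, err := http.Get(listURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// One "host:port" entry per line; shuffle and take the first `size`
	hosts := strings.Fields(string(body))
	rand.Shuffle(len(hosts), func(i, j int) { hosts[i], hosts[j] = hosts[j], hosts[i] })

	if int(size) > len(hosts) {
		size = uint32(len(hosts))
	}

	batch := make([]string, 0, size)
	for _, h := range hosts[:size] {
		batch = append(batch, fmt.Sprintf("stun:%s", h))
	}
	return batch, nil
}

func main() {
	batch, err := randomSTUNBatch(5)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(batch)
}
```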
Batch size is consequential because of the way Pion uses the STUN server array: at signaling time, Pion attempts to hit all of the supplied STUN servers in parallel, racing the responses. If the batch size is too large, it can tax the client to the point that we exhaust the ICE gathering timeout and signaling fails.
In operation, this is how it all works:
In the worker's state 0, we construct a new `RTCPeerConnection` structure, which must be parameterized by an immutable list of STUN servers. At that time, we call out to the `STUNBatch` function to give us a batch of servers to use.
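A simplified sketch of that state-0 step, assuming Pion's `webrtc/v3` API (the helper name here is hypothetical; the real worker wires this up differently):

```go
package worker

import "github.com/pion/webrtc/v3"

// newConnectionFromBatch draws a fresh batch and builds a peer connection
// whose ICE server list is fixed for the lifetime of that connection.
func newConnectionFromBatch(
	stunBatch func(size uint32) ([]string, error),
	batchSize uint32,
) (*webrtc.PeerConnection, error) {
	batch, err := stunBatch(batchSize)
	if err != nil {
		return nil, err
	}

	config := webrtc.Configuration{
		ICEServers: []webrtc.ICEServer{
			{URLs: batch}, // immutable once the connection is constructed
		},
	}

	return webrtc.NewPeerConnection(config)
}
```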
If the worker subsequently finds a peer to begin signaling with but encounters the ICE candidate gathering error -- indicating that none of the provided STUN servers worked -- it returns to state 0, where `STUNBatch` supplies a new random batch of STUN servers for the next try.
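A simplified sketch of that fallback loop; the signaling helper, error value, and state handling here are illustrative stand-ins for the worker's real machinery:

```go
package worker

import (
	"errors"
	"log"
)

// Illustrative stand-ins; the real worker's signaling path and error values
// live elsewhere in broflake and look different.
var errICEGatheringFailed = errors.New("ICE candidate gathering failed")

func signalWithPeer(batch []string) error {
	// Offer/answer exchange elided
	return nil
}

// runSignalingLoop: every return to "state 0" calls STUNBatch again, so a
// failed attempt gets a fresh random batch rather than retrying dead servers.
func runSignalingLoop(stunBatch func(size uint32) ([]string, error), size uint32) {
	for {
		// State 0: draw a new batch and build the connection around it
		batch, err := stunBatch(size)
		if err != nil {
			continue
		}

		// Later states: find a peer and signal
		if err := signalWithPeer(batch); errors.Is(err, errICEGatheringFailed) {
			log.Println("no usable STUN servers in this batch; back to state 0")
			continue
		}

		return // signaling succeeded
	}
}
```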
Addresses https://github.com/getlantern/broflake/issues/4