pion / ice

A Go implementation of ICE
https://pion.ly/
MIT License

Continued TCP-ICE Improvements #245

Open Sean-Der opened 4 years ago

Sean-Der commented 4 years ago

This is a list of the things we can do to continue improving the tcp-ice implementation that landed.

jeremija commented 4 years ago

I think the spec is also pretty specific about situations where inbound connections need to be dropped. Currently we'll accept any TCP connection with the right ufrag, but when receiving active (inbound) TCP connections, we should only accept the ones whose source IP address matches a remote candidate we already have (the TCP port is irrelevant in this case because browsers will just set it to 9).

We shouldn't drop the TCP connection during ICE restarts so that's another thing to think about.

Also, I think the way priorities are calculated might need to be changed for TCP candidates.

jeremija commented 4 years ago

We should also provide a way to figure out which network type is currently in use by an ICE Agent. For example, there's no need for a jitter buffer when TCP is in use.

Perhaps there could be a function:

func (a *Agent) SelectedNetworkType() (t NetworkType, ok bool) {
  pair := a.getSelectedPair()
  if pair == nil {
    return t, false
  }

  return pair.local.NetworkType(), true
}

And then we could expose this information in pion/webrtc:

func (pc *PeerConnection) SelectedNetworkType() (t NetworkType, ok bool) {
  nt, ok := pc.iceTransport.gatherer.getAgent().SelectedNetworkType()
  if !ok {
    return t, ok
  }

  t, err := NewNetworkType(nt)
  return t, err == nil
}
jech commented 4 years ago

func (pc *PeerConnection) SelectedNetworkType() (t NetworkType, ok bool) {

How would the client know when this information becomes invalid and must be recomputed? Can it only change when ICE transitions to the failed state, or can it change at any time?

(I agree that this kind of information is useful, for example I'd like to ignore NACK requests on TCP connections since the data will get resent by the transport anyway. OTOH, I'd argue that you still need a jitter buffer with TCP. Due to retransmissions and head-of-line blocking, the amount of jitter is higher with TCP than with UDP. And while TCP will not reorder packets, there's still the possibility of an intermediary that does reordering. Case in point, an SFU might be proxying from UDP to TCP, so the traffic carried on the TCP connection is already reordered due to the UDP leg. Sorry for the digression.)

jeremija commented 4 years ago

Those are all very good points!

How would the client know when this information becomes invalid and must be recomputed? Can it only change when ICE transitions to the failed state, or can it change at any time?

This is what I'm not sure about. If it only changes on ICE state changes, perhaps it's better to convey this information via the OnICEConnectionStateChange?

jech commented 4 years ago

It's easier for the client to query the sender/receiver/transceiver than to keep track of the value last passed to the callback. This is analogous to the ICE connection state, which is passed to the callback but also available directly from a PeerConnection method.

dgunay commented 3 years ago

@Sean-Der I am unblocked on my issue (was able to get Wowza to enable UDP ICE for me), but seeing as it's October and I try and do Hacktoberfest every year, this seems like a great place to contribute. I'd like to try and help implement active TCP-ICE.

I've only been grappling with low level WebRTC stuff for a few weeks now so I'm not sure where to get started. If you have any guidance on parts of the code I should familiarize myself with before breaking ground, it would be much appreciated.

And thanks for spearheading this project by the way, WebRTC looks scary and complicated from my perspective as an outsider to the technology but your efforts and attitude are super inspiring.

Sean-Der commented 3 years ago

@dgunay that is great news!

I think the best place to start would be Active TCP Candidates. Simultaneous Open is something we could add later.

I would start with just opening a TCP socket in addRemoteCandidate that connects to the other side. The full version will always generate a local TCP candidate, but only create a socket and dial when a remote candidate exists.

TestTCPMux_Recv is a test that shows the listening side. You can copy this and then implement the connecting side.

Sean-Der commented 3 years ago

Fire any/all questions at me in Slack. Would love to help you get started :)