anacrolix opened 4 years ago
Hi @anacrolix Unfortunately, pion/sctp does not support EOR just yet. The application-level message size needs to be less than 64KB.
SCTP fragments a message into smaller chunks (roughly the path MTU, ~1280 bytes) to avoid fragmentation at the IP layer, but it also imposes a hard limit of 64KB per message to prevent one large message from monopolizing the available bandwidth.
I'm hoping to get on it to support large messages as soon as I can.
@enobufs Thanks. I'm using a detached datachannel, which offers the io.Writer interface. By detaching the datachannel, my assumption was that I'm now streaming bytes, and how my writes are packetized is irrelevant. Should the detached datachannel wrapper break up writes that exceed the underlying transport's supported message size? Neither the sender nor the receiver in this implementation cares how the bytes are grouped at this point.
If this behaviour isn't intended by the datachannel type, is the maximum message write size exposed somewhere so I can create such a wrapper?
@Sean-Der @backkem I realize I may not fully be aware of the intention behind making the detached datachannel a ReadWriteCloser.
The standard datachannel is a "message-oriented" protocol, NOT a "stream-oriented" one. SCTP tries to retain the beginning and end of each message. If the intention of the detached datachannel is to be "stream-oriented", then we'd probably need to do something about it (adding another layer), for instance:
But then, if we do this, the SCTP's partial reliability option is useless in this mode.
Any thoughts?
You are right, the detached data channel is an io.ReadWriteCloser and doesn't do any packetizing. We use the io.ReadWriteCloser for packet-based communication in many places in the stack (it is basically one giant Matryoshka doll of io.ReadWriteClosers). It isn't really intended to be stream-oriented. I'm just not sure what other interface we should use for connection-based, packetized communication. The net package has PacketConn, but that interface makes more sense in the connection-less case, since you provide the destination on every write. I created the detached data channel so I didn't have to work with the WebRTC API's callback-based API, not to provide a stream-oriented interface.
Adding our own packetizer on top would mean we introduce our own behavior, which is not part of the WebRTC/data channel spec. Sending giant packets, while probably unwise, is just fine according to the spec. In addition, you need to take into account that the other side of the communication, which is potentially a browser, will have to implement the inverse behavior. E.g.: if you split a stream, you'll get multiple OnMessage calls on the browser side. You'd have to account for this, and we had better not hide it in our implementation. Therefore, I'd prefer to leave packetization to the consumer of the library, even when using detached datachannels.
At some point I was wondering whether to create a copy of the io.ReadWriteCloser interface under a different name to signal the difference (stream- vs. packet-based). Maybe that's an option? It could be used across all Pion code.
Thanks @backkem for the clarification. I saw one person trying to use io.Copy with the detached data channel experiencing a stall during data transfer. The cause was io.Copy, which fed a buffer of length 8KB to dc.Read() while the data channel had a larger packet sitting in the reassembly queue, causing io.ErrShortBuffer. So there is a real case where packet-based communication and streaming I/O tools don't play well together. I agree: faking stream behavior may be problematic on many levels, and users would need to handle the mismatch and glue the two sides together themselves.
@anacrolix
If this behaviour isn't intended by the datachannel type, is the maximum message write size exposed somewhere so I can create such a wrapper?
As discussed above, the data channel (even when detached) is not a stream transport (and was never intended to be one), so you will need some workarounds at both ends of the data channel. Here's my (obvious) suggestion:
The max size for a write is hard-coded as 65535. If the data channel is detached, the buffer you pass to dc.Read() is accessed directly by the SCTP layer, and if it is shorter than the message waiting to be read, Read will return io.ErrShortBuffer and the data is discarded.
Hope this helps!
@enobufs @backkem Thanks for the feedback. It would seem to me that overloading the existing io.Writer and io.Reader interfaces may be an unfortunate design decision, as those interfaces do have contracts that assume stream behaviour. You mentioned that this is already pervasive throughout the Pion codebases. Perhaps new interfaces like type MessageWriter interface { WriteMessage... } and type MessageReader interface ... could be used?
Is there somewhere I can obtain the maximum outbound message size for a given write? On a Read, as you mentioned, I could handle io.ErrShortBuffer, but the data is discarded, so I would need something similar there too?
What is the purpose of the Stream.packetize method? Is it breaking a message into chunks to go over something like UDP? Do the payloads reassemble into a message at the other end?
Is there any ETA on supporting EOR?
Is there any ETA on supporting EOR?
Last time I checked, this doesn't apply to us. EOR is a flag used by the kernel-level SCTP API. pion/webrtc uses its own userland SCTP implementation, with its own Go-centric API, where this EOR flag doesn't exist.
I just want to check in on this again. Has there been any change? I have a long-lived workaround here that I believe is contributing to another issue in anacrolix/torrent.
There is a small writeup about this here https://blog.mozilla.org/webrtc/large-data-channel-messages/
We need to parse the SDP, and then determine if the remote supports this.
We also need to update our code/SDP to declare to others we support it.
The workaround I used in anacrolix/torrent for this may be contributing to https://github.com/anacrolix/torrent/issues/982.
Doing a DataChannel.Write can generate the error "Outbound packet larger than maximum message size 65535" for large slices. See https://gophers.slack.com/archives/CAK2124AG/p1588493648338600.