**domenic** opened this issue 6 years ago (Open)
This would mean changing the calculation of desiredSize from being directly based on the queue to being tracked separately. This seems reasonably achievable, but less elegant.
I'm concerned that callers who are looking at `desiredSize` to determine write size will end up doing tiny writes as they see `desiredSize` decrement by small amounts. This is a well-known anti-pattern in networking: http://www.tcpipguide.com/free/t_TCPSillyWindowSyndromeandChangesTotheSlidingWindow.htm
Certainly, silly window syndrome is an anti-pattern. But the jumps from full to empty and back again are an anti-pattern as well (for TCP and SCTP, the congestion window can reset and slow start is initiated). The word *desired* in `desiredSize` implies to me that this is the recommended chunk size the controller wants to see, which in turn implies that a chunk shouldn't be too large and shouldn't be too small. I'm not sure it's a good idea to leave this task up to the application or other controllers (e.g. by doing `Math.min(writer.desiredSize, preferredChunkSize, remainingBytes)`).
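For illustration, the producer-side arithmetic mentioned above can be wrapped in a small helper (the function name `nextChunkSize` and its handling of edge cases are my own, not anything specced):

```javascript
// Sketch of producer-side chunk sizing: cap each write at both the
// writer's desiredSize and a preferred chunk size, so writes are
// neither tiny (silly-window-style) nor larger than the sink wants.
function nextChunkSize(desiredSize, preferredChunkSize, remainingBytes) {
  // desiredSize is null on an errored stream and <= 0 when the queue
  // is full; treat both as "don't write yet".
  if (desiredSize === null || desiredSize <= 0) return 0;
  return Math.min(desiredSize, preferredChunkSize, remainingBytes);
}
```

The point of the comment thread is that leaving this `Math.min` dance to every application is questionable; the stream itself arguably has enough information to recommend a size.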
As an idea: we could add finer-grained progress feedback, as mentioned by @domenic, and couple it with a required `lowWaterMark`. Then `.ready` would not resolve before `.desiredSize >= .lowWaterMark`.
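As a userland sketch of the proposed semantics (the helper name `readyWithLowWaterMark` and its polling loop are illustrative only; today's `writer.ready` resolves as soon as `desiredSize` rises above zero):

```javascript
// Proposed behaviour, approximated in userland: don't consider the
// writer "ready" until it reports at least lowWaterMark of free space.
async function readyWithLowWaterMark(writer, lowWaterMark) {
  await writer.ready; // resolves once desiredSize > 0
  while (writer.desiredSize !== null && writer.desiredSize < lowWaterMark) {
    // Yield to the event loop and re-check; a real spec-level feature
    // would resolve a promise instead of polling like this.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```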
This now sounds related to a general problem with `desiredSize`: often you don't want any extra buffering, but you do want to receive writes of some ideal size. This is a particular issue with pipes, where you'd like the amount that is read from the start of the pipe to reflect the amount of data that the end of the pipe would like to consume, but there's currently no way for that information to get through the intervening transform streams.

I am contemplating some kind of "pass-through desiredSize" mode to address this issue. I haven't worked out a concrete design, but it looks like it could also address the use case of an underlying sink that wants to update `desiredSize` at finer granularity than the input chunks.
The idea of delaying `.ready` until a certain amount of space is available seems broadly similar to #493.
The specification currently states:

> The `WritableStream()` constructor accepts as its first argument a JavaScript object representing the underlying sink.

We can define a method on the JavaScript object passed to the `WritableStream()` constructor:
```js
class WSController {
  constructor(/* data */) {
    this.bytesSoFar = 0;
  }
  progress(controller, bytesSoFar) {
    console.log(controller, bytesSoFar);
  }
  start(controller) {
    console.log(controller);
  }
  write(data, controller) {
    // Report progress after each chunk is handed to the sink.
    this.progress(controller, ++this.bytesSoFar);
  }
}

let wscontroller = new WSController();
let writableStream = new WritableStream(wscontroller);
```
Two additional alternative approaches:

- A `CustomEvent` with `type` set to `"progress"`, dispatched with `bytesSoFar` and the `resolve` argument of a `Promise` executor set at the `detail` property of the event passed to `.dispatchEvent()`; this provides a means to halt further writes (or reads) until the `Promise` is fulfilled.
- An `EventSource` can be used as a persistent streaming connection (until closed) to get the bytes received at or sent from the remote connection.
@lgrahl brought up in an offline conversation that the way that a writer's desiredSize jumps in response to the underlying sink's write() promise fulfilling can be problematic for large chunk sizes. He thought it would be nicer if there were a way to signal write progress along the way, thus allowing the producer to get a notification that they can write more bytes sooner, rather than later.
Concretely, you can imagine something like
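One possible shape, with `markProgress` as a hypothetical controller method (it does not exist in the spec; the optional call makes this sketch a no-op against today's streams) and `splitIntoPieces`/`sendToSink` as illustrative stand-ins:

```javascript
// Stand-in: split a chunk into fixed-size pieces for the sink.
function splitIntoPieces(chunk, pieceSize = 16) {
  const pieces = [];
  for (let i = 0; i < chunk.length; i += pieceSize) {
    pieces.push(chunk.subarray(i, i + pieceSize));
  }
  return pieces;
}

// Stand-in for the real I/O operation.
async function sendToSink(piece) {}

const ws = new WritableStream({
  async write(chunk, controller) {
    for (const piece of splitIntoPieces(chunk)) {
      await sendToSink(piece);
      // Hypothetical progress signal: would let desiredSize recover
      // piece-by-piece instead of only when the whole write() settles.
      controller.markProgress?.(piece.length);
    }
  },
});
```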
This seems a little fragile and tricky; e.g. how does it interact with the size of the chunk as computed by the queuing strategy; having to deal with this new controller method being called at inappropriate times; giving it a good name; etc. But you at least get the idea.
I think the next step here is learning more about systems where this would make sense. The underlying sinks I'm most familiar with don't have this progress-reporting capability. Hopefully @lgrahl can weigh in.