ricea opened this issue 4 years ago (status: Open)
This left a rollercoaster behind and created more issues that need to be fixed. It doesn't sound like an easy task to write a spec that works across multiple threads and pipes 😅
So here is a rough workaround for those who stumble upon this issue and are looking for a solution that works right now.
I think the proper model here is that transferring a ReadableStream transfers the ability to read from the stream - the end that writes to the stream is not transferred, and the connection between the writing end and the reading end remains in place.
Isn't this the same model as for message channels?
I think the proper model here is that transferring a ReadableStream transfers the ability to read from the stream - the end that writes to the stream is not transferred, and the connection between the writing end and the reading end remains in place.
That's the idea, yes.
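The message-channel analogy can be made concrete with a small sketch (illustrative only; a Node-style `once('message')` listener is used here, where a browser would use `onmessage`): only the receiving port would be handed over in a transfer, while the sending end keeps writing and the connection between the two ends remains in place.

```javascript
// Two ends of one channel: think of port1 as the writing end and port2 as
// the transferable "ability to read". Transferring port2 would not move
// port1; the pipe between them stays connected.
const { port1, port2 } = new MessageChannel();

const received = new Promise((resolve) => {
  // Node-style listener; in a browser this would be port2.onmessage.
  port2.once('message', (chunk) => {
    resolve(chunk);
    port1.close(); // closing either port tears down the channel
  });
});

port1.postMessage('chunk'); // the writing end is unaffected by any transfer
received.then((chunk) => console.log(chunk)); // prints: chunk
```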
The problem is mostly technical: we have to figure out how to make that work. There's a whole discussion on the original issue about all the peculiarities we have to deal with, such as:

- The stream's internal state ([[state]], [[storedError]], [[disturbed]], ...).
- A WritableStream may still have chunks in its queue which have not yet been processed by the "cross-realm sink". If the stream is transferred, then we must also transfer this queue, so we don't lose those chunks. (I suppose the same applies to a queued close or abort request.)
- A ReadableStream may have already received chunks from the "cross-realm source" and put them in its own stream.[[controller]].[[queue]] to be read later. Again, if the stream is transferred, then we must also transfer this queue. (I guess [[closeRequested]] also needs to be transferred?) Though with the current spec, the [[queue]] of a cross-realm readable stream is always empty.
- writer.write(), writer.close() and writer.abort() return a promise that resolves when the underlying sink has processed the requested operation. However, if we transfer the WritableStream while those promises are still pending, then we lose access to the message port, so we can no longer communicate with the "real" underlying sink on the other end. We still need to figure out what should happen with these pending promises. Do we leave them pending forever? Resolve them, assuming that "everything will be fine"? Or maybe reject with an error to indicate that the stream was transferred?

Here's a sketch of an approach which reconciles the atomic nature of transfer with the asynchronous nature of streams.
I'm going to talk about the WritableStream case because I think it is the harder of the two.
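To make the pending-promise peculiarity concrete, here is a minimal sketch (a slow sink stands in for the cross-realm sink behind a message port; this is illustrative, not spec text):

```javascript
// writer.write() returns a promise that only settles once the underlying
// sink has processed the chunk, so it can still be pending at the moment
// the stream is transferred.
const ws = new WritableStream({
  write(chunk) {
    // Simulate a slow "cross-realm sink": keep the write pending a while.
    return new Promise((resolve) => setTimeout(resolve, 50));
  },
});

const writer = ws.getWriter();
const pending = writer.write('chunk');
// If ws were transferred right here, this realm would lose the message
// port, and `pending` could never hear back from the real sink:
// leave it pending forever, resolve it optimistically, or reject it?
pending.then(() => console.log('write processed'));
```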
After step 9, realm A has been "unhooked" and can safely be destroyed.
There can be an arbitrary delay for queued writes to complete before A is "unhooked". Maybe we can force the queued chunks from A into O's queue by ignoring backpressure to make this delay as short as possible?
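A rough sketch of the "ignore backpressure" idea (hypothetical, not spec text; `queuedInA` stands in for realm A's queued chunks and the sink stands in for realm O's queue): instead of awaiting each write, all queued writes are issued at once, so the queue drains as fast as the sink can accept it.

```javascript
const received = [];
const ws = new WritableStream({
  write(chunk) {
    received.push(chunk); // stands in for forwarding into O's queue
  },
});

const writer = ws.getWriter();
const queuedInA = ['a', 'b', 'c']; // stand-in for A's queued chunks

// Fire every write immediately rather than awaiting backpressure between
// chunks; the sink still receives them in order.
const flushed = Promise.all(queuedInA.map((chunk) => writer.write(chunk)));
flushed
  .then(() => writer.close())
  .then(() => console.log(received)); // [ 'a', 'b', 'c' ]
```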
There can be an arbitrary delay for queued writes to complete before A is "unhooked".
That is an ergonomics issue: one calls postMessage(), everything seems fine, so one navigates away thinking all is well, but too quickly, and the unhooking never happens. The current spec approach stays on the safe side so that there are no surprises.
Hmm, delaying the transfer of the queued chunks is indeed quite risky.
While the proposed solution could work for WritableStream, I'm not so sure it'd work for a ReadableStream. The current spec tries to avoid sending chunks from the original stream to the transferred stream that are not yet being requested, so ideally the transferred stream's queue is always empty. However, other spec changes might make it possible for that queue to become non-empty. For example, with #1103, you might do this:
```js
const controller = new AbortController();
const reader = readable.getReader({ signal: controller.signal });
reader.read(); // causes the cross-realm readable to send a "pull" message
controller.abort(); // discards the pending read request
// At this point, we have no pending read requests, but we are already pulling...
// After some time, we receive a "chunk" message and put the chunk in the queue.
// Which means that if you now transfer the stream...
worker.postMessage(readable, { transfer: [readable] });
// ...we have to do something with the chunks in the queue first.
```
I think it's better if we transfer the entire queue synchronously as part of the transfer steps. In the transfer-receiving steps, we would re-enqueue those chunks with controller.enqueue() and writer.write().
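A sketch of what the transfer-receiving steps could do with such a queue (illustrative; `transferredQueue` and `closeRequested` are stand-ins for the serialized [[queue]] and [[closeRequested]] slots): re-enqueue the chunks synchronously, then replay the pending close.

```javascript
const transferredQueue = ['a', 'b']; // stand-in for the transferred [[queue]]
const closeRequested = true;         // stand-in for [[closeRequested]]

// On the receiving side, rebuild the stream and replay the queue with
// controller.enqueue(), then replay the queued close request.
const replacement = new ReadableStream({
  start(controller) {
    for (const chunk of transferredQueue) controller.enqueue(chunk);
    if (closeRequested) controller.close();
  },
});

const reader = replacement.getReader();
const firstRead = reader.read();
firstRead.then(({ value, done }) => console.log(value, done)); // a false
```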
- We give up the nice abstraction of the "cross-realm identity transform" because it makes the following stuff harder. Instead we have separate transfer logic for ReadableStream and WritableStream.
So the transfer steps for ReadableStream would acquire a reader, and for WritableStream they would acquire a writer? I think I would like that better than the current solution with pipeTo(), actually. 😛
After step 9, realm A has been "unhooked" and can safely be destroyed.
Is step 9 needed? Can't A close itself immediately after step 7?
The original transferable streams issue has been closed now that support has landed, but the discussion of the double-transfer problem that started at https://github.com/whatwg/streams/issues/276#issuecomment-482797085 and consumed the rest of the thread has not concluded.
Summarising the issue, the following code works fine:
However, if you subsequently run worker.terminate(), then transferred_rs will start returning errors. For other transferable types, no connection remains with any previous realms that the object was passed through, but in the case of streams, data is still being proxied via the worker. See the linked thread for why this is hard to fix.