Closed: Offroaders123 closed this 1 year ago
The promise returned by `write()` doesn't resolve until the chunk is actually decompressed, which doesn't happen until something reads from the readable side. Since using `await` prevents the function from reaching the `reader.read()` line, nothing ever reads, and the function cannot make progress. It's stuck waiting forever for something to read.

The behaviour of Chrome, Safari and Firefox is correct.
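A minimal sketch of the working pattern (my own illustration, not code from this thread): drain the readable side concurrently instead of awaiting `write()`/`close()` before any read happens.

```javascript
// Minimal illustration (my own): compress data with CompressionStream
// without deadlocking, by reading concurrently with the writes.
async function gzip(data) {
  const stream = new CompressionStream("gzip");
  const writer = stream.writable.getWriter();
  // Chain the writes without awaiting them yet; write() only settles
  // once the transform consumes the chunk, which requires a reader.
  const writeDone = writer.write(new Uint8Array(data)).then(() => writer.close());
  const reader = stream.readable.getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  await writeDone; // any write()/close() errors surface here
  // Concatenate the output chunks into one Uint8Array.
  const out = new Uint8Array(chunks.reduce((n, c) => n + c.length, 0));
  let offset = 0;
  for (const c of chunks) { out.set(c, offset); offset += c.length; }
  return out;
}
```

Because the read loop runs while the writes are still pending, the `write()` promise can settle in every implementation, and awaiting `writeDone` afterwards still propagates any errors.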
I discovered that the methods of `WritableStreamDefaultWriter` (at least when used with `CompressionStream` and `DecompressionStream`) don't resolve consistently across implementations. I was wondering if there is a defined standard behavior as to when they should resolve. My discoveries for this error were tracked along this issue here.

As mentioned in the WICG draft report, you can use the Compression Streams API with an `ArrayBuffer` object by using a `WritableStreamDefaultWriter`, making use of the `writer.write()` and `writer.close()` methods to chunk in parts of the `ArrayBuffer`.

These two methods each return a `Promise`, however, so I thought it would make sense to call them with `await`, so that any errors from the calls would bubble back up to the enclosing `async function`. This doesn't appear to work in all platform implementations though: adding these `await` calls works correctly in Node.js' implementation, but not in Chrome, Safari, or Firefox.
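The pattern in question looks roughly like this (identifiers are my own, not the exact code in question); it resolves in Node.js but hangs in the three browsers:

```javascript
// Rough sketch of the described pattern (names are mine): decompress an
// ArrayBuffer via a WritableStreamDefaultWriter, awaiting write() and
// close() directly.
async function decompress(data, format = "gzip") {
  const stream = new DecompressionStream(format);
  const writer = stream.writable.getWriter();
  // In Node.js both awaits resolve with undefined; in Chrome, Safari,
  // and Firefox they never settle, since nothing reads the other side
  // until the function gets past these lines.
  await writer.write(new Uint8Array(data));
  await writer.close();
  const reader = stream.readable.getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
  return chunks;
}
```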
Using `writer.write()` and `writer.close()` with `await` never resolves in any of the three browsers, while in Node.js both resolve promptly with `undefined`. Is this because Node.js has a stream implementation distinct from the browsers'? I don't see why these calls never resolve in the browsers.

The interesting part is that all implementations do throw errors where applicable, say if the data was decompressed with a format that doesn't match the one the `ArrayBuffer` was compressed with.

So without being able to use `await` on these methods, you have to catch the errors manually with your own `.catch()` handlers, which feels similar to what you have to do when using `for await` loops, which Jake Archibald covered in an article on his blog.

I mainly discovered this in my own project, where I am using the Compression Streams API to discern the compression format of a given `ArrayBuffer` file, using nested `try`-`catch` statements to find out what the file was or wasn't compressed with (example code, similar to that of my project).
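A hypothetical sketch of that detection idea (the identifiers and helper are mine, not the project's actual code): try each format in turn and catch the rejection that a mismatched format produces.

```javascript
// Hypothetical sketch (my own names): probe which format, if any, an
// ArrayBuffer was compressed with, by trying each one and catching the
// error a mismatched DecompressionStream produces.
async function detectCompressionFormat(data) {
  for (const format of ["gzip", "deflate", "deflate-raw"]) {
    try {
      await decompressWith(data, format);
      return format; // decompression succeeded
    } catch {
      // wrong format (or format unsupported); try the next one
    }
  }
  return null; // not compressed with any known format
}

// Helper that fully drains the stream, so format errors actually surface.
async function decompressWith(data, format) {
  const stream = new DecompressionStream(format);
  const writer = stream.writable.getWriter();
  // Not awaited up front (see above); errors surface at the final await.
  const writeDone = writer.write(new Uint8Array(data)).then(() => writer.close());
  writeDone.catch(() => {}); // avoid an unhandled rejection if reading throws first
  const reader = stream.readable.getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read(); // rejects on mismatched data
    if (done) break;
    chunks.push(value);
  }
  await writeDone;
  return chunks;
}
```

Note the errors are still catchable with ordinary `try`-`catch` here because nothing is awaited before reading begins; the pending write/close promise is awaited only after the readable side has been drained.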