Is this a request for server implementations of fetch()? Seems better suited for the WinterCG.
Yeah, pretty much... thought I would bring it up here first though...
I think we'd need some pretty compelling use cases to consider this, as it would be somewhat non-trivial to do as I understand it.
Here is a compelling case for a Chrome extension:
Another use case could be to do a partial download of something that's encoded and also supports range requests. Say that I want to download something really large; `content-encoding` and `content-length` are provided along with an `Accept-Ranges` response header.
I initiate a call:

```js
const response = await fetch(url, {
  method: 'GET',
  raw: true, // hypothetical option proposed here: skip automatic decompression
  headers: {
    'accept-encoding': 'gzip, deflate'
  }
})
```
From now on I will know `content-length`, but I will not know what the actual data is unless I pipe it through a `new DecompressionStream()` set to `'gzip'` or `'deflate'` (matching the `content-encoding`).
```js
const progress = document.querySelector('progress')
const chunks = [] // ultra simple store

for await (const rawChunk of response.body) {
  // show how much has been downloaded (not how much has been decompressed)
  progress.value += rawChunk.byteLength
  // store the chunks somewhere
  chunks.push(rawChunk)
}
```
With this in place I can provide a good solution for failed downloads: by calculating exactly how much I have downloaded, I can make a range request and continue from where I left off when the connection failed. This would also be a good solution for pausing/resuming a download.
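A minimal sketch of what resuming could look like, assuming the hypothetical `raw: true` option from above and a server that honors `Range` requests (`chunks` and `progress` are the variables from the loop above):

```js
// Hypothetical resume: ask for the remaining raw (still-encoded) bytes only.
const downloaded = chunks.reduce((total, chunk) => total + chunk.byteLength, 0)

const resumed = await fetch(url, {
  raw: true, // hypothetical option: hand back the bytes exactly as sent
  headers: {
    'accept-encoding': 'gzip, deflate',
    range: `bytes=${downloaded}-` // continue from the last raw byte received
  }
})

// 206 Partial Content means the server honored the range request.
if (resumed.status === 206) {
  for await (const rawChunk of resumed.body) {
    progress.value += rawChunk.byteLength
    chunks.push(rawChunk)
  }
}
```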
Now that I have all the chunks, I can go ahead and decompress them using the `DecompressionStream`:
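A sketch of that final step, assuming the `content-encoding` was gzip:

```js
// Reassemble the stored raw chunks and decompress them in one go.
const rawBlob = new Blob(chunks)
const decompressed = rawBlob.stream().pipeThrough(new DecompressionStream('gzip'))
const data = await new Response(decompressed).arrayBuffer()
```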
Unfortunately we lose some very useful stuff with this `raw` option: we can't use brotli decoding (due to lack of support in `DecompressionStream`), and `text()`, `json()`, `arrayBuffer()` and `response.body` are not so useful anymore because they require more work afterwards.
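To illustrate that extra work: with a raw response you would have to decompress manually before the convenience helpers apply. A minimal sketch, assuming the body hasn't been consumed yet and the encoding is gzip:

```js
// Wrap the manually decompressed stream in a Response to recover json()/text().
const decompressed = response.body.pipeThrough(new DecompressionStream('gzip'))
const data = await new Response(decompressed).json()
```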
Another option would be to be able to hook in and inspect the data somehow before it's decompressed, so an alternative solution could be to do something like:
```js
const response = await fetch(url, {
  // hypothetical hook, called with each raw (still-encoded) chunk
  onRawData (chunk) {
    // ...
  }
})
```
```js
// alternative considerations
const response = await fetch(url)
const clone = response.clone()
response.json().then(done, fail)
// `rawBody` would expose the still-encoded stream; consuming it
// makes `clone.body` locked and unusable.
clone.rawBody.pipeThrough(monitor).pipeTo(storage)
```
So I can say that I found two additional use cases besides a server proxy: 1) progress monitoring, 2) pausable/resumable downloads.
One other use case: If I wish for an application to cache the compressed response data with a custom storage layer, it would be convenient if the application could take the data as encoded by the server and push it into the cache directly. At the moment, one can only grab the data in its decompressed form, which would either waste space in the cache or require the application to re-compress it.
- We have done this successfully with XMLHttpRequest and this works for many scenarios, but now service workers are required, thus Fetch is required.
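A sketch of what that could look like with the proposed `raw` option (`myStorage` is a stand-in for whatever custom cache layer the application uses):

```js
// Keep the bytes exactly as the server encoded them and store them,
// together with the metadata needed to decode them later.
const response = await fetch(url, { raw: true }) // hypothetical option
const encoded = await response.arrayBuffer() // still gzip/br-encoded bytes

await myStorage.put(url, { // `myStorage` is a hypothetical custom cache layer
  encoding: response.headers.get('content-encoding'),
  bytes: encoded
})
```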
Out of curiosity, how did you do it with XMLHttpRequest? I didn't think that was possible.
There is a need for proxy servers to simply pass data forward from A -> B without decompressing it, as decompressing would invalidate the `content-length` and `content-encoding` headers (like a CORS proxy server, for instance). So we need an option to disable transformation.
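A sketch of such a pass-through proxy, written as a Workers-style handler and assuming the hypothetical `raw: true` option (the `upstreamUrl` helper, which maps the incoming request to the target URL, is also hypothetical):

```js
export default {
  async fetch (request) {
    // Fetch the upstream resource without any automatic decompression,
    // so content-length and content-encoding stay valid.
    const upstream = await fetch(upstreamUrl(request), { raw: true }) // hypothetical option

    // Forward status, headers and the untouched (still-encoded) body.
    return new Response(upstream.body, {
      status: upstream.status,
      statusText: upstream.statusText,
      headers: upstream.headers
    })
  }
}
```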