WICG / pending-beacon

A better beaconing API

Migrated: Any facility for POST body compression? #56

Open clelland opened 1 year ago

clelland commented 1 year ago

From https://github.com/clelland/page-unload-beacon/issues/1:

Today, large beacon bodies are common, and compressing them is annoyingly hard. Could this be fixed here?

@ericlaw1979

clelland commented 1 year ago

So PendingBeacon can take a ReadableStream as input, which should plug directly into the output side of a CompressionStream. That would mean sending your data into the CompressionStream, and then feeding that stream into setData, which would read and then buffer the compressed data for later sending.

Implementation-wise, I think this would mean doing the compression on the renderer side. I don't know if there is any increased danger of data loss by moving to something less synchronous than just sending in a byte array. (e.g., if there would have been time before a renderer crash to send the raw data, but not to wait for the async compression to finish.)
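A minimal sketch of the shape described above, assuming the proposed setData() accepts a ReadableStream as discussed; CompressionStream, Blob.stream(), and pipeThrough() are standard APIs, while PendingPostBeacon is the proposed interface and the endpoint and payload are placeholders:

```js
// Sketch only: assumes the proposed PendingPostBeacon.setData() accepts a
// ReadableStream. The endpoint URL and payload contents are placeholders.
const beacon = new PendingPostBeacon('/collect');

const raw = JSON.stringify({ lcp: 1234, cls: 0.02 });   // sample analytics payload
const compressed = new Blob([raw])
  .stream()                                             // ReadableStream over the raw bytes
  .pipeThrough(new CompressionStream('gzip'));          // output side of the CompressionStream

// The browser would read and buffer the compressed bytes for later sending.
beacon.setData(compressed);
```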

clelland commented 1 year ago

It occurs to me, after writing that, that this would mean the HTTP request wouldn't have compression applied as a transport encoding; it would be a regular POST whose payload happens to be a compressed byte stream encoded in JSON. We'd have to make sure that character escaping doesn't interfere with the goals of compression here.

yoavweiss commented 1 year ago

/cc @nicjansma @ksylor

yoavweiss commented 1 year ago

An alternative here would be browser-side compression of the entire body before sending. The downside of that approach is that there's no way for the browser to negotiate compression formats with the server, so the browser would have to trust the app's assertion that the server we're sending the data to can handle the requested compression format.

ksylor commented 1 year ago

@yoavweiss @clelland Could there be an optional parameter where the acceptable compression format(s) are passed into the options object, and if none are specified, the payload is not compressed? Also curious if this could be an option for GET as well as POST beacons?
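A purely hypothetical shape for such an option, for illustration only (neither the option name nor its handling exists in the proposal):

```js
// Hypothetical only: `acceptableContentEncodings` is an invented option name.
const beacon = new PendingPostBeacon('/collect', {
  // The app lists encodings its endpoint can decode; if the browser supports
  // none of them, the payload would simply be sent uncompressed.
  acceptableContentEncodings: ['gzip', 'br'],
});
beacon.setData(JSON.stringify({ lcp: 1234, cls: 0.02 })); // sample payload
```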

yoavweiss commented 1 year ago

I'm curious RE the use case for GET compression. Presumably whatever compression scheme we'd end up with would require backend work to enable the endpoint to decompress the data. Given that, is there any reason why the payload can't be sent via a POST?

Also, one more complication: there's a good chance that arbitrary POST compression would mean that the request is not CORS-safe and hence would require a preflight. I'm not sure that's awful.

ksylor commented 1 year ago

I don't have a use case for GET compression, @yoavweiss. I was just curious why this was specifically for POST and not both.

tunetheweb commented 1 year ago

> I don't have a use case for GET compression, @yoavweiss. I was just curious why this was specifically for POST and not both.

Headers (including the URL) are automatically compressed for HTTP/2 and HTTP/3 (and can't be compressed for HTTP/1.1), so this is mostly about compressing the body of a large POST message.

yoavweiss commented 1 year ago

> Headers (including the URL) are automatically compressed for HTTP/2 and HTTP/3

I wouldn't rely on HPACK and QPACK doing a good job of compressing one-time arbitrary payloads here. They're good at compressing repeating header names or values, and beacon payloads won't be a good fit for them. Generally, I think we should steer people towards POST-based beacons.

nicjansma commented 1 year ago

Also see https://github.com/w3c/beacon/issues/72 for discussions on this in the context of sendBeacon().

As @ksylor mentions, for RUM I would also prefer a mechanism to specify the desired compression encodings, applied (as supported) just before beaconing rather than every time I call setData().

For some RUM use cases, I think we'll be adding/updating content in a PendingBeacon multiple times prior to the page unload/beaconing, and it would be ideal not to pay the cost of compressing a stream until the very "final" payload gets sent.

RE: this triggering a CORS preflight, it's not awful but could increase the likelihood of beacons never arriving (not due to preflight denial, but due to the latency it adds).

fergald commented 1 year ago

@nicjansma Perhaps we could do the pre-flight at the time the beacon is created.

yoavweiss commented 1 year ago

So, when talking about a Fetch-based API, I think we have 2 options here:

1. The API will accept a CompressionStream, and compression would happen when the fetch is emitted. Any future update to the beacon would trigger re-compression of the data (maybe we can avoid that by recycling the CompressionStream for additions, but not for modifications).
2. We'd rely on Content-Encoding on the request side and Accept-Encoding on the response side to perform the compression on the browser side. As @fergald said, we can send an empty request at beacon creation time to test the server's ability to handle Content-Encoding. This option will not require re-compression for beacon changes.

yoavweiss commented 1 year ago

At the same time, for (2) it'd be a bit harder to integrate a post-compression quota.

yoavweiss commented 7 months ago

A third option would be for the compression to happen on the browser side, as an opt-in from the user (e.g., adding a compression: gz field to the RequestInit).
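A hedged sketch of what that opt-in might look like, assuming the field landed on RequestInit roughly as described; the compression member does not exist today, and the endpoint and payload are placeholders:

```js
// Hypothetical only: `compression` is the proposed opt-in, not an existing
// RequestInit member; the endpoint and payload are placeholders.
fetch('/collect', {
  method: 'POST',
  keepalive: true,                                // let the request outlive the page, like a beacon
  compression: 'gz',                              // browser would compress the body before sending
  body: JSON.stringify({ lcp: 1234, cls: 0.02 }),
});
```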