noamr opened 1 year ago
I think we need a more holistic approach here.
`HTMLImageElement.decode()` is protected by neither TAO nor CORS. Talking to @mmocny, there are potentially other ways to weed out the decode times of opaque images.
We need to collectively decide if that information should be protected, and if so, by what means. We also need to ensure those means are ergonomic and easy to use, either through document-level CORS-fetch opt-ins for subresources, through public resource declaration, both, or something else entirely.
/cc @clamy @mikewest @annevk @arturjanc
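To make the concern concrete, here is a minimal sketch (my own, not from the issue; the image URL is hypothetical) of how `decode()` timing is observable today for an opaque cross-origin image, with no TAO or CORS opt-in:

```js
// Hypothetical third-party image, loaded without any CORS opt-in.
const img = new Image();
img.src = "https://third-party.example/opaque.png";
await new Promise((resolve, reject) => {
  img.onload = resolve;
  img.onerror = reject;
});

// decode() resolves once the image data has been decoded, so its
// duration correlates with the complexity of the opaque image.
const start = performance.now();
await img.decode();
console.log("decode took", performance.now() - start, "ms");
```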
I agree about finding a more holistic approach. The rule of thumb in today's HTML/Fetch specs, though, is that TAO can only protect the timing of a fetch, not anything to do with the resource's content, which is generally protected by CORS. Using TAO to protect render time breaks this, and means we would need to save TAO information in the HTTP cache, among other repercussions.
For those not up on their acronyms, TAO = Timing-Allow-Origin
At a high level we shouldn't add to the number of cross-origin channels without opt-in. That includes `decode()`. Given that it returns a promise it still seems somewhat malleable, but I can't find an existing issue targeting that. (There are some other issues with it though that suggest it's not being maintained.)
I also like the idea of not expanding the scope of the `Timing-Allow-Origin` header beyond fetch timing data.
@mmocny isn't `renderTime` observable-ish today for no-cors images, by doing something like the following?
```js
await new Promise(resolve => img.addEventListener("load", resolve));
// start time
await new Promise(requestAnimationFrame);
// end time
```
You ask if `renderTime` is "observable-ish"... I would say no. Measuring the next rAF after the image "load" event is not the same as `renderTime` [1].

Since you mention no-cors (rather, non-TAO today), I think you are asking if you can work around the limitation of LCP only reporting `loadTime`? Well, I don't think your polyfill suffices, as it only (somewhat) measures the "paint" stage on main [2].
However, you can use `decode()` to wait for a no-cors image to be ready (as mentioned already), and then also use any strategy to get the render time of the next frame: rAF for paint time, or element timing for renderTime (you would have to measure something that passes TAO/CORS, like text). A sketch of this combination appears at the end of this comment.

[1]: That's because `requestAnimationFrame` does not measure `renderTime`. On Chromium, "renderTime" is (typically) after image decode and represents frame presentation time, while rAF measures a much earlier stage on the main thread, before (final) decode.

On FF/Safari, there is no "LCP.renderTime", but we can use "FCP.startTime" as a comparison. While I think that timing is equivalent(ish?) to next-rAF-after-load, the frame it measures comes before image decode, AFAIK, and so does not represent the "LCP.renderTime" concept.

See: https://github.com/web-platform-tests/interop/issues/215
[2]: One small caveat regarding your snippet is that the image "load" event may not get scheduled until after the next animation frame: `requestAnimationFrame` after "load" will often be off by at least one frame, and if the main thread has many tasks queued, this gap can be wider. At least that's what I've observed in Chromium.
However, a more robust strategy is rAF polling while checking the status of `img.complete`, since this appears to get marked in the right frame:
```js
const img = document.querySelector('img');

img.addEventListener("load", (event) => {
  console.log("image load", performance.now(), event);
});

function rafPoll() {
  requestAnimationFrame((frameTime) => {
    const nowTime = performance.now();
    // This may come before img.onload event fires
    if (img.complete) {
      console.log('image complete', frameTime, nowTime);
    } else {
      rafPoll();
    }
  });
}

// Start polling after image begins loading
rafPoll();
```
...again, all that just measures loading + scheduling paint stage, and not decoding + presentation.
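For concreteness, here is the rough sketch promised above (my own illustration; `approximateRenderTime` is a hypothetical name): `decode()` to wait for the no-cors image to be ready, then the next rAF timestamp as an approximation of paint time. It still only captures the main-thread paint stage, not decoding + presentation:

```js
async function approximateRenderTime(img) {
  // Wait until the (possibly no-cors) image is ready to paint.
  await img.decode();
  // The next rAF timestamp approximates when the frame containing
  // the image is painted (main-thread paint stage, not presentation).
  const frameTime = await new Promise(requestAnimationFrame);
  return frameTime;
}
```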
I understand those timings are slightly different; I wonder if the difference between the polyfill and renderTime is significant in terms of revealing hidden information about the image. Let's say it's an SVG, and a logged-in user would get a really complex SVG vs. a much simpler one for non-logged-in users.
> At a high level we shouldn't add to the number of cross-origin channels without opt-in. That includes `decode()`. Given that it returns a promise it still seems somewhat malleable, but I can't find an existing issue targeting that. (There are some other issues with it though that suggest it's not being maintained.) I also like the idea of not expanding the scope of the `Timing-Allow-Origin` header beyond fetch timing data.
Created an issue for `decode`: https://github.com/whatwg/html/issues/8709
Note that `HTMLImageElement.decode()` does not work on SVG elements.
Not sure if canvas drawing being synchronous is something web devs can use to gather rendering information. Also, `createImageBitmap()` can expose the render time of cross-origin images.
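A minimal sketch of that last point (my own illustration; the image URL is hypothetical): `createImageBitmap()` succeeds even for an opaque cross-origin image, and the time it takes to resolve correlates with the image's decode cost:

```js
const img = new Image();
img.src = "https://third-party.example/opaque.png"; // no CORS opt-in
await new Promise(resolve => img.addEventListener("load", resolve));

// createImageBitmap() decodes the image; its duration reflects the
// decode cost even though the resulting bitmap's pixels stay opaque
// to script (tainted when drawn to a canvas).
const start = performance.now();
await createImageBitmap(img);
console.log("createImageBitmap took", performance.now() - start, "ms");
```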
The intention of `Timing-Allow-Origin` (TAO) is to protect the timing it took to fetch the image, not the content of the image. The intention in protecting `renderTime` is to avoid exposing something about the content of the image.

With CORS images, one can know everything about the image anyway and doesn't need to decipher the content from render time. With TAO, though, this would be the only place where it's used outside of the context of protecting fetch information.