w3c / resource-timing

Resource Timing
https://w3c.github.io/resource-timing/

Resource processing time #133

Open yoavweiss opened 6 years ago

yoavweiss commented 6 years ago

While talking to people about visual metrics and the reasons they're currently hard to get from RUM, a recurring theme is that knowing resources' processing time would be useful for understanding their visual impact.

Use cases I encountered so far:

- Image decode time
- Font processing time
- JS parse and execution time

Does it make sense to add that to Resource Timing? Or should we try to split it out into a separate spec that hooks into Resource Timing, in a similar way to Server Timing?
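
For concreteness, a rough sketch of what consuming such data could look like if it landed in Resource Timing; `processingStart`/`processingEnd` are invented names for illustration, nothing is specified:

```js
// Hypothetical: PerformanceResourceTiming entries extended with
// processingStart/processingEnd (invented names, not in any spec).
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const network = entry.responseEnd - entry.startTime;            // exists today
    const processing = entry.processingEnd - entry.processingStart; // hypothetical
    console.log(`${entry.name}: network ${network}ms, processing ${processing}ms`);
  }
});
po.observe({ type: 'resource', buffered: true });
```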

tdresser commented 6 years ago

I'm hoping that the Element Timing API will, over time, be extended to handle the first two cases.

When should font processing time end? If this is hung off of Resource Timing, it seems a bit weird to me to measure until it's displayed, but it's quite natural as part of Element Timing.

In theory, long tasks V2 will handle the JS case, if we can sort out attribution.
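
For reference, a minimal sketch of what the current Long Tasks API already surfaces, with its 50ms floor and coarse attribution:

```js
// Long Tasks today: only tasks >= 50ms are reported, and attribution
// is coarse (frame/container info, not per-script processing cost).
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    for (const attr of task.attribution) {
      console.log(`${task.duration}ms task`, attr.containerType, attr.containerSrc);
    }
  }
}).observe({ entryTypes: ['longtask'] });
```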

nicjansma commented 6 years ago

For JS processing (parsing/executing) time, LongTasks won't capture the case unless the task is over 50ms, correct?

It would be very useful to have per-script stats on parse time and initial execution time via ResourceTiming. In fact, just yesterday I was digging into this with https://github.com/danielmendel/DeviceTiming
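
For context, DeviceTiming instruments the script files themselves with timestamps; a rough user-land sketch of the same parse-vs-execute measurement (loading via fetch + Function changes scoping and caching, so treat it as an illustration only, not DeviceTiming's mechanism):

```js
// Rough sketch of per-script parse vs. initial-execution timing.
async function measureScript(url) {
  const source = await (await fetch(url)).text();
  const t0 = performance.now();
  const compiled = new Function(source); // parse/compile (may be lazy in some engines)
  const t1 = performance.now();
  compiled();                            // initial execution
  const t2 = performance.now();
  return { url, parse: t1 - t0, execute: t2 - t1 };
}

measureScript('/static/app.js').then(console.log); // hypothetical URL
```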

tdresser commented 6 years ago

Sorry, yes, long tasks will only handle a subset of the JS case.

How high-priority is the sub-50ms JS case?

colinbendell commented 6 years ago

I think Image/Video parsing time and JS parsing time have different use cases and should probably be split into two different specs.

1) With media timings (image/video/font), you want to know how long the user was waiting for pixels on the screen. This is a slightly wider use case than Element Timing: how long, from the time the UA knew to draw something, did it take the network to turn the content around (RT) and the UA to decode and show the content in the device's active viewport. While decode time is important, it can be less relevant if the media content is off screen, below the fold, or otherwise covered. Hero image/element timing assumes there is minimal distinction between decode and in-view, but I would argue that this distinction is important in the real world as a validation metric for content creators. More generally, media content timing should focus on the timings of (a sketch using today's primitives follows at the end of this comment):

   a) discovery (DOM parsing? RT initiator?)
   b) network transfer (RT)
   c) decode & paint
   d) viewport visibility: when the pixels finally first show in the UA's active viewport (this is important for below-the-fold prioritization and discovery, or even for ascertaining when delays cause content to be painted too late, after the user has already scrolled out of view; see Medium as a classic use case)

The owners of the media timing are the visual & creative designers and operations.

2) With JS parse & execute timing, the developer is focused on the variability of parsing as a critical-path item for overall page responsiveness and usability. The follow-on metrics of JS execution are generally handled by LongTasks and UserTiming. The owner of this timing is the front-end developer; using these timing results, I would expect her to take action by optimizing JS code & bundlers to address device variance.

JS parsing might belong more naturally in RT, while the media timings might make sense to spin off separately.
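
A rough sketch of approximating phases b) through d) with primitives that exist today (Resource Timing, img.decode(), IntersectionObserver); the img.hero selector is made up, and img.decode() measures from the call site rather than pure decode cost:

```js
// Approximating the media phases above with today's primitives.
// Assumes an <img class="hero" src="..."> in the page; phase a),
// discovery, has no good proxy here and is omitted.
const img = document.querySelector('img.hero');

// b) network transfer, from Resource Timing
const [rt] = performance.getEntriesByName(img.currentSrc, 'resource');
const network = rt ? rt.responseEnd - rt.startTime : undefined;

// c) decode: img.decode() resolves once the image can be painted;
// an upper bound from the call site, not pure decode cost.
const decodeStart = performance.now();
img.decode().then(() => {
  const decode = performance.now() - decodeStart;

  // d) viewport visibility: first time the pixels are actually in view
  const io = new IntersectionObserver((entries) => {
    if (entries.some((e) => e.isIntersecting)) {
      console.log({ network, decode, visibleAt: performance.now() });
      io.disconnect();
    }
  });
  io.observe(img);
});
```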

tdresser commented 6 years ago

> Hero image/element timing assumes there is minimal distinction between decode and in-view, but I would argue that this distinction is important in the real world as a validation metric for content creators.

My hope is that while this will be true for the initial version of Element Timing, we'll add more granular timing information in successive versions.
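
For reference, a minimal sketch of what Element Timing exposes per annotated element today; the loadTime/renderTime pair is the coarse granularity in question (markup like `<img src="hero.jpg" elementtiming="hero-image">` is assumed):

```js
// Element Timing today: per-element loadTime and renderTime for elements
// annotated with an elementtiming attribute. The load-to-render gap
// loosely covers decode + paint, with no separate decode or in-view
// timestamp. (renderTime may be 0 for cross-origin images without
// a Timing-Allow-Origin header.)
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.identifier, 'load:', entry.loadTime, 'render:', entry.renderTime);
  }
}).observe({ type: 'element', buffered: true });
```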