(As @dbaron points out, this is imperfect in the case of scores which fall right on the boundary between buckets.)
Well, that's the point of bucketization.
There is certainly some effective precision level: a cumulative score of 2.0 versus 2.0001 is likely not meaningful, but a score of 2.0 versus 1.5 is.
One challenge here is that the API exposes the per-frame score and not the cumulative score. It's up to the developer to add them up. I don't think we want to bucket the per-frame score, as that could introduce an accumulated "drift".
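For reference, here's a minimal sketch of the accumulation a developer does today (assuming the `layout-shift` entry type from the spec; the `LayoutShiftEntry` interface is declared inline since `LayoutShift` isn't in the standard TypeScript DOM typings):

```typescript
// LayoutShift isn't in the standard DOM typings, so declare the one
// field we need here.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number; // per-frame layout shift score
}

let cumulativeScore = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    // Sum the raw per-frame values. Bucketing or rounding each entry
    // here, before summing, is exactly what would introduce the
    // accumulated "drift" mentioned above.
    cumulativeScore += entry.value;
  }
});

observer.observe({ type: "layout-shift", buffered: true });
```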
Got it. Perhaps we just need some developer guidance in the spec on how to use and interpret the score.
Additionally, some more fleshed-out usage examples would probably be useful.
Related to the interoperability issues mentioned by @dbaron in https://github.com/WICG/layout-instability/issues/23, and to the questions raised by me and @lknik on the TAG review thread at https://github.com/w3ctag/design-reviews/issues/393.
I'm interested in what the effective precision or granularity of the score may be. In particular, is there a granularity at which we might expect some level of interoperability? And would making the score less granular help erase meaningless variation in the score?
For example, could we potentially allocate a limited number of buckets such as (per @skobes' comment) low, medium and high, and return an enum indicating which of these buckets the computed score falls in? What likelihood would that have of being both useful and interoperable? Would a finer granularity be meaningful?
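To make the bucket idea concrete, here is a hypothetical sketch; the threshold values are illustrative placeholders, not anything proposed in the spec:

```typescript
type ShiftBucket = "low" | "medium" | "high";

// Hypothetical bucketing of the *cumulative* score. The 0.1 / 0.25
// thresholds are made up purely for illustration.
function bucketScore(cumulativeScore: number): ShiftBucket {
  // Note @dbaron's boundary concern: 0.099 and 0.101 land in different
  // buckets despite being nearly identical scores.
  if (cumulativeScore < 0.1) return "low";
  if (cumulativeScore < 0.25) return "medium";
  return "high";
}
```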