ddbeck / common-web-feature-mockup


Case study: Offscreen Canvas #7

ddbeck opened this issue 1 year ago

ddbeck commented 1 year ago

Last week, @foolip nudged me with this idea:

How about https://caniuse.com/offscreencanvas, can you try to group together BCD features that make up that feature and see if you get the same answer?

[Screenshot (2022-12-14): caniuse.com support table for OffscreenCanvas]

I set out to do just that, and I'm opening this issue to record what I did and some of the discoveries that followed.

What I did

To see whether I could generate something that caniuse might readily consume, I tried to duplicate something it can already do: represent Offscreen Canvas support.

  1. I created a JSON representation of a feature group which consisted of a flat list of 71 mdn/browser-compat-data (BCD) features, drawn from:

    • OffscreenCanvas
    • OffscreenCanvasRenderingContext2D
    • HTMLCanvasElement

    and their various methods, properties, and other descendant features (a rough sketch of such a group file appears after this list).

  2. I ran the ./src/cli.js script against that feature group. It produced support results that closely matched caniuse, summarized here:

    | Browser | Since version | Since date |
    | ------- | ------------- | ---------- |
    | Chrome  | 69            | 2018-09-04 |
    | Firefox | 105           | 2022-09-20 |
    | Safari  | N/A           | N/A        |
  3. I shared the results with Philip, who asked, "Did you have to make a decision to exclude the `contextlost` event?"
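
For illustration, here's roughly what the group file from step 1 might look like. The field names are just my sketch rather than a settled schema, and only a handful of the 71 BCD feature identifiers are shown:

```json
{
  "$comment": "Illustrative sketch only, not the mockup's actual schema; feature identifiers are BCD-style dotted paths.",
  "name": "Offscreen canvas",
  "features": [
    "api.OffscreenCanvas",
    "api.OffscreenCanvas.OffscreenCanvas",
    "api.OffscreenCanvas.getContext",
    "api.OffscreenCanvas.transferToImageBitmap",
    "api.OffscreenCanvasRenderingContext2D",
    "api.HTMLCanvasElement.transferControlToOffscreen"
  ]
}
```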

Philip had discovered a notable omission: I hadn't included the more recent additions to the Offscreen Canvas API for handling context loss and restoration.

This led to some more investigation and a number of interesting lessons learned.

Conclusions

Invite domain experts into the group authoring process and seek to detect unincorporated features

I constructed the feature list for Offscreen Canvas on my own: I skimmed the MDN docs and the relevant bits of the HTML spec that I could find (for example). I completely missed the context loss and restoration features because they're not in BCD or the resulting compat tables on MDN. That made those features much less visible to me; a domain expert would likely have known about this part of Offscreen Canvas.

What I learned:

Avoid splitting groups before they achieve consensus (a.k.a "baseline") implementation status

After figuring out the details above, I started, but have not finished, experimenting with splitting Offscreen Canvas into three groups:

  1. A group for the Offscreen Canvas API as it was before the introduction of the context-loss-and-restoration API (i.e., a group of Offscreen Canvas features as they existed at the time of Chrome 69's release).
  2. A group for Offscreen Canvas context loss and restoration (i.e., a group consisting of OffscreenCanvasRenderingContext2D.isContextLost() and the contextlost and contextrestored events for offscreen canvases).
  3. An omnibus group consisting of groups 1 and 2.

This work continues because the mockup doesn't yet handle processing groups of groups. But I didn't need to finish the implementation to learn some useful things.
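
As a sketch of where this is heading (not something the mockup supports yet), the omnibus group could reference the other two groups by file name; the groups field and the file names here are invented for illustration:

```json
{
  "$comment": "Hypothetical sketch: nested groups aren't implemented in the mockup yet.",
  "name": "Offscreen canvas (with context loss and restoration)",
  "groups": [
    "offscreen-canvas-original.json",
    "offscreen-canvas-context-loss.json"
  ]
}
```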

What I learned:

Bug (now fixed): incorrect summarization of group support

In the course of all the preceding work, I discovered that my summarization of support across many features picked the wrong version and date from the pool that a group's support was calculated from.

For example, given a group of features supported from versions 50, 60, and 90, the group as a whole should be regarded as supported from version 90, when the last of the requisite features was introduced. Instead, the script erroneously picked the earliest version (50), because I unthinkingly reused code that picked the earliest, which made sense in another context.
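
A minimal sketch of the corrected logic (illustrative data and code, not the mockup's actual implementation): a group's support point is the latest of its features' support points, so the summarizer has to take the maximum rather than the minimum.

```js
// Hypothetical per-feature support data for a single browser.
const featureSupport = [
  { version: "50", date: "2016-04-13" },
  { version: "60", date: "2017-07-25" },
  { version: "90", date: "2021-04-14" },
];

// A group is only fully supported once its last feature ships, so take the
// latest (maximum) support point, not the earliest (minimum).
const groupSupport = featureSupport.reduce((latest, entry) =>
  new Date(entry.date) > new Date(latest.date) ? entry : latest
);

console.log(groupSupport); // { version: '90', date: '2021-04-14' }
```

Comparing by release date rather than by version string also sidesteps having to parse or order version numbers.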

What I learned:

foolip commented 1 year ago

Thanks for the write-up, that's very helpful.

The discussion on avoiding splitting is especially interesting. I agree that it doesn't seem useful to distinguish between two versions of offscreen canvas, but I wonder how to think about this while a feature is supported only in a single engine. Do we treat that as an experimental and moving target, or are there cases where splitting makes sense even for single-engine features? I think trying to group more features is the best way to learn.

Regarding the bug: in addition to tests, code review is also a way to catch bugs. However, I think that if we're comparing the output to caniuse, then we'll eventually spot almost all errors in the data and code through that process, so it's not necessary to go overboard with testing and review.

ddbeck commented 1 year ago

are there cases where splitting makes sense even for single-engine features? I think trying to group more features is the best way to learn

My hunch is that some single-engine features might have meaningful groups anyway (e.g., what if web developers don't think of a group as being a single thing? We might need to split it anyway), but I completely agree that grouping more features is the best way to learn.

it's not necessary to go overboard with testing + review

👍

atopal commented 1 year ago

Very interesting read. Thanks Daniel!