bkardell: agree on the general direction, but there are specific questions about the timing of prioritization. For example, in cases where only one organization has a public position, do we move forward with it?
foolip: the idea of having a champion is a good one. However, the restriction on face-to-face time for championing proposals should be discussed further. A possible alternative is to do some of that work offline and discuss disagreements in the meeting. Three prioritization buckets sound reasonable.
nsull: like the idea of having a smaller/simpler set of priority signals. The preference for async vs. in-person is a cultural thing; would prefer face-to-face for the most important discussion topics.
jgraham: the time for presenting support does have to be limited. In cases where there is not enough public data to confirm support, it might be worth discussing live.
dandclark: ideally we want to spend our time talking about proposals that are on the margin. Logistically, it would be good to have some signaling ahead of time.
jgraham: if there is some overlap on proposals that organizations want to champion, then that would be the signal.
foolip: on championing - how do we see that playing out?
jgraham: everybody comes up with their list of proposals to champion. If more than one party wants to champion a proposal, we can sort out who would do it.
bkardell: have advocated splitting the proposals into groups and having organizations decide which group to champion.
nairnandu: what is the next step here? Should we try a dry run? One idea proposed earlier was to have some reference proposals that we have consensus on.
jgraham: the next step would be to write this up as a PR and ask for feedback. Not sure a dry run would help here.
bkardell: most of the process should be familiar to us. The question is always why we are picking certain things.
jgraham: one thing we can do in the interim is start gathering a list of data points we want to collect for Interop proposals.
nsull: yes, did bring that up in the previous meeting. Examples: survey data, developer requests, bug stars, etc.
foolip: +1 on covering that in a brainstorming session.
bkardell: +1. It would be great if we could share those signals and talk about them.
nsull: synchronous discussion would be preferable for this.
jgraham: this would give us enough time ahead of the call for proposals.
Next steps: 1) a PR for Mozilla’s proposal and 2) a brainstorming session on developer, user, and compatibility signals.
jgraham: daily score and historic scores, i.e. what the dashboard showed on a given day. The next step here would be a note to Daniel on how we can incorporate that into the dashboard. We can do the back-end work, but it would require some front-end work.
foolip: happy to create a bullet list of items to show the workflow.
jgraham: it’s for people to understand why something is broken.
meyerweb: happy to do an editorial pass on this.
Next step: foolip will author a PR.
Consider some improvements to WPT dash #663
bkardell: an extra column or view showing how many tests pass in all browsers. This should help us uncover areas where we can come together and agree on a priority.
foolip: like the universally-passing metric; it is complementary to BSF.
jgraham: our PoV is that BSF did not work the way it was intended. The change in the number of interoperable tests over time could be something we discuss in detail.
Here is the proposed agenda for the meeting on May 31st, 2024