foolip opened this issue 4 years ago
@ddbeck is there a team I can ping for feedback on an issue like this, with all active reviewers? I suspect not, so my question is: how should I circulate an issue like this?
@saschanaz @sideshowbarker I see and appreciate your rocket reactions :) Would you happen to have thoughts on what would most speed up your reviews in BCD?
@foolip
is there a team I can ping for feedback on an issue like this, with all active reviewers? I suspect not, so my question is: how should I circulate an issue like this?
No, and I'm not sure how to approach this. I'm going to think on it.
Also, since I'm not 100% sure I'll get to this today and I cannot contain my enthusiasm any longer: I am extremely excited at the prospect of this.
Would you happen to have thoughts on what would most speed up your reviews in BCD?
I'm not really a reviewer here, but inline annotation per file sounds even better. I've seen some annotations with colors, so that would also be great.
Here's a sample of what it currently looks like for #6838: … I could quickly spot when data for other browsers than expected was being modified, and most importantly I didn't have to spend as much time understanding the diff context.
Yup, the sample looks very useful
- Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.
Output of a check would be the least disruptive and least prone to breakage
Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage.
Those assessments are from my experience with similar things we tried for WPT.
Inline annotations on the individual files sounds quite complicated.
- What's a good way to visualize the changes? Options that come to mind are a table with entry/browser/before/after columns, or as a single compat table with the changes somehow highlighted within, perhaps as insertions and ~~deletions~~.
I don’t have any strong opinions on how that part should be done, nor do any good suggestions come to mind for me.
Output of a check would be the least disruptive and least prone to breakage
Yeah, that's what we did for WPT, with a custom GitHub Check that summarizes test results. Example: https://github.com/web-platform-tests/wpt/pull/26089/checks?check_run_id=1244245864
The downside is it's a web service to maintain (react to webhooks + do stuff + create checks with outcome) and not very discoverable.
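As a rough illustration of the "do stuff + create checks with outcome" step, here is a minimal sketch of what the core of such a service might look like. The endpoint and payload fields come from the GitHub Checks REST API (`POST /repos/{owner}/{repo}/check-runs`); the check name, token handling, and summary content are hypothetical, not anything BCD or WPT actually runs.

```javascript
// Sketch: the core of a hypothetical Checks-based summary service.
// Assumes a GitHub App installation token with checks:write is available.
function buildCheckRunPayload(headSha, summaryMarkdown) {
  return {
    name: 'bcd-diff-summary', // hypothetical check name
    head_sha: headSha,
    status: 'completed',
    conclusion: 'neutral',
    output: {
      title: 'Compat data changes',
      summary: summaryMarkdown,
    },
  };
}

async function postCheckRun(owner, repo, token, payload) {
  // POST /repos/{owner}/{repo}/check-runs (GitHub Checks REST API)
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/check-runs`,
    {
      method: 'POST',
      headers: {
        authorization: `Bearer ${token}`,
        accept: 'application/vnd.github+json',
      },
      body: JSON.stringify(payload),
    }
  );
  return res.json();
}
```

The webhook-handling and auth plumbing around this is exactly the maintenance burden mentioned above.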
Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage.
Those assessments are from my experience with similar things we tried for WPT.
With far fewer reviewers in BCD, I wonder if we could sidestep some of these concerns. Modifying the description is very widespread in the form of PR Preview, and posting comments is also common in the form of coverage bots. Both can be annoying, but I feel like I could live with either, especially the former.
Inline annotations on the individual files sounds quite complicated.
Do you mean complicated for reviewers to understand, or complicated to implement? It's not very hard to implement, I think; I've successfully done it in experiments using both the API for writing annotations directly and the stdout-parsing support that's built into GitHub Actions. One challenge would be determining which line to write the annotation for, but a naive approach could be the first edit in each file.
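For the stdout-parsing route mentioned here, GitHub Actions turns specially formatted lines printed by any step into inline annotations on the diff. The workflow-command syntax below is GitHub's; the per-file summaries and the choice to annotate line 1 (standing in for "the first edit in each file") are assumptions for illustration only.

```javascript
// Sketch of the GitHub Actions stdout-parsing route: lines of the form
// `::warning file=F,line=N::message` become inline annotations.
function annotationCommand(file, line, message) {
  // Workflow-command syntax documented by GitHub Actions.
  return `::warning file=${file},line=${line}::${message}`;
}

// Hypothetical per-file summaries produced by a diff script:
const changes = [
  { file: 'css/properties/gap.json', summary: 'chrome 57→66, firefox 52→61' },
];

// Printing these from any step in a workflow is enough; no API calls needed.
for (const { file, summary } of changes) {
  console.log(annotationCommand(file, 1, summary));
}
```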
Output of a check would be the least disruptive and least prone to breakage
Yeah, that's what we did for WPT, with a custom GitHub Check that summarizes test results. Example: https://github.com/web-platform-tests/wpt/pull/26089/checks?check_run_id=1244245864
The downside is it's a web service to maintain (react to webhooks + do stuff + create checks with outcome) and not very discoverable.
Yeah, I agree about those being downsides in general. I think the discoverability downside wouldn't be a problem in practice for our use case here. But the maintenance issue is real.
Modifying the PR description would be the most disruptive (and potentially most annoying) and the most prone to breakage. Those assessments are from my experience with similar things we tried for WPT.
With far fewer reviewers in BCD, I wonder if we could sidestep some of these concerns.
Yes, agreed.
For WPT, I personally never found either the description-modifying option or the comment mechanism annoying. I just know that others did.
Modifying the description is very widespread in the form of PR Preview, and posting comments is also common in the form of coverage bots. Both can be annoying, but I feel like I could live with either, especially the former.
Yes, same here. I would be fine with either of them. I'd also be fine with the output-of-a-check option. So I guess I don't feel strongly enough about any of those to call them undesirable.
Inline annotations on the individual files sounds quite complicated.
Do you mean complicated for reviewers to understand
No
or complicated to implement?
Yeah, I just meant I imagined it being complicated to implement
It's not very hard to implement, I think; I've successfully done it in experiments using both the API for writing annotations directly and the stdout-parsing support that's built into GitHub Actions. One challenge would be determining which line to write the annotation for, but a naive approach could be the first edit in each file.
OK, knowing that, I have zero objections to that inline-annotations approach — in fact, from a usability point of view, it sounds like it could be the most helpful, by virtue of providing the information at point-of-use. And given that in https://github.com/mdn/browser-compat-data/issues/6863#issuecomment-706244432 @saschanaz weighed in in favor of inline annotations too, I guess I’d put my vote there also.
Thanks for opening this issue, @foolip. I am extremely excited about the prospect of a summary for reviewing PRs. This would be an outstanding addition to the project.
Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.
There's been mention of all of these as options, but I'm not sure how all of them work. If anyone would like to share various annotations they've appreciated from other projects, I'd love to see what that looks like in context, with real-world examples.
But it sounds like there's broad support for inline annotations. I imagine that's a good place to start.
What's a good way to visualize the changes?
As you said, "a table with entry/browser/before/after columns" was my first thought, though I realized there's another perspective that could be useful: by browser, then by feature (e.g., suppose I work at a browser vendor and I want to provide feedback only for changes applicable to the browser I actually work on). But, honestly, just your sample output dressed up with some basic Markdown would be a great start.
Also, having more than one way to represent the diff might be useful (more on this in the implementation notes bit below).
Implementation notes:
Would it be possible to produce some sort of separate intermediate representation of the diff (e.g., a JSON payload) and then generate different outputs from that representation?
I could see this being very useful for generating release notes (e.g. providing a detailed list of created/modified features) or to request reviewers by feature area or browser. The diff could be the basis for a lot of interesting automation.
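The intermediate-representation idea above could be sketched as follows. The entry shape (entry/browser/before/after) is assumed from the table idea earlier in the thread, and both renderers are hypothetical; none of this is BCD's actual tooling.

```javascript
// Sketch: an intermediate JSON diff representation plus two renderers
// built on it. The shape is an assumption, not BCD's real diff format.
const diff = [
  { entry: 'css.properties.gap', browser: 'chrome', before: '57', after: '66' },
  { entry: 'css.properties.gap', browser: 'firefox', before: '52', after: '61' },
];

// Renderer 1: a Markdown table with entry/browser/before/after columns.
function toMarkdownTable(diff) {
  const rows = diff.map(
    (d) => `| ${d.entry} | ${d.browser} | ${d.before} | ${d.after} |`
  );
  return [
    '| entry | browser | before | after |',
    '| --- | --- | --- | --- |',
    ...rows,
  ].join('\n');
}

// Renderer 2: the by-browser view, for vendor-specific review
// (or for requesting reviewers by browser).
function byBrowser(diff) {
  const grouped = {};
  for (const d of diff) {
    (grouped[d.browser] ??= []).push(`${d.entry}: ${d.before} → ${d.after}`);
  }
  return grouped;
}
```

Release notes, reviewer routing, and any future output format would just be more renderers over the same JSON payload.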
The meta question:
is there a team I can ping for feedback on an issue like this, with all active reviewers?
No, we don't have that. I know there exists some GitHub feature to do this sort of thing, but I cannot for the life of me figure out what to search for to find it in the docs. If you can remember what this is called, please let me know so I (or someone else) can put it in an issue title. 😆
In the meantime, please @ anyone individually you want to get the attention of. Though with the exception of vinyldarkscratch, I think we've got the most active contributors right now already participating.
The meta question:
is there a team I can ping for feedback on an issue like this, with all active reviewers?
No, we don't have that. I know there exists some GitHub feature to do this sort of thing, but I cannot for the life of me figure out what to search for to find it in the docs. If you can remember what this is called, please let me know so I (or someone else) can put it in an issue title. 😆
I believe GitHub teams can’t be created at the repo level but instead need to be created at the organization level.
So in our case, that means somebody who has owner perms for the https://github.com/mdn/ org would need to go into the UI at https://github.com/orgs/mdn/teams and create whatever teams we needed.
When reviewing a PR with a lot of changes, I spend most of the time expanding the diff to see which entry is being modified. This is especially annoying when reviewing changes towards the end of the browser list, where multiple clicks are needed to reveal the relevant context.
@vinyldarkscratch and I have discussed a GitHub Action workflow to produce a summary of changes as a PR comment (or something) to speed things up. I have written a proof-of-concept `npm run diff` in https://github.com/mdn/browser-compat-data/pull/6862, which I've used on a few PRs today, and found that it indeed increased my review velocity by a lot.

Here's a sample of what it currently looks like for https://github.com/mdn/browser-compat-data/pull/6838:
I could quickly spot when data for other browsers than expected was being modified, and most importantly I didn't have to spend as much time understanding the diff context.
If others think this might be useful, there are a number of questions before making it real:

- Where should the summary be shown? Options are modifying the PR description, posting a new comment, inline annotations on the individual files, or as the output of a check.
- What's a good way to visualize the changes? Options that come to mind are a table with entry/browser/before/after columns, or as a single compat table with the changes somehow highlighted within, perhaps as insertions and ~~deletions~~.

The choices will inform how this has to be implemented. Thoughts appreciated.