Thanks for trying this out! This is an experiment, so it's worth keeping the pros and cons in mind as we try it out.
Potential benefits: increased fairness, transparency, objectivity, and reproducibility; it also encourages participation.
Potential downsides: added complexity, the risk of discouraging organic collaboration, and the possibility of gaming the system or overemphasizing quantity over quality.
This is not an urgent PR. Based on a chat with @shntnu, we are coming up with a way to distribute code reviews fairly across everyone. To do so, we first need to know who is reviewing, and who has reviewed, other PRs. This information is not directly accessible through GitHub, so I use one of dogsheep's tools (github-to-sqlite) to fetch it. I then use datasette to visualise it in the browser, which makes it universally accessible.
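For context, this is roughly what the underlying workflow looks like; the repo slug and database filename below are placeholders:

```bash
# One-time setup: github-to-sqlite prompts for a GitHub personal access
# token and stores it in auth.json
github-to-sqlite auth

# Fetch pull-request records (author/assignee metadata included) into SQLite;
# "org/repo" stands in for the actual repository
github-to-sqlite pull-requests github.db org/repo

# Serve the database locally and browse it at http://localhost:8001
datasette github.db
```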
The current way to do this requires installing two packages (github-to-sqlite and datasette). I added pyproject.toml and poetry.lock files to pin them. Producing the data takes one command, and opening it in the browser takes another, which makes it pretty accessible. Adding the toml and lock files may be overkill, but it ensures reproducibility.
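Concretely, with the committed toml and lock files, reproducing this should boil down to something like the following (again, the repo slug is a placeholder):

```bash
# Install pinned dependencies, then run the two commands inside the environment
poetry install
poetry run github-to-sqlite pull-requests github.db org/repo
poetry run datasette github.db
```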
The alternative is to use GitHub Actions to automate this and put the .db somewhere accessible, then query it with datasette-lite. The downside is that managing credentials and auth to upload the updated database will add overhead. I'd like to hear @leoank's opinion on this, even though I'm not suggesting him as a reviewer.
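As a rough sketch of that alternative, a scheduled Action could run something along these lines; the release tag is hypothetical, and this assumes github-to-sqlite can pick up a token from the GITHUB_TOKEN environment variable:

```bash
# Rebuild the database in CI (token read from the GITHUB_TOKEN env var)
github-to-sqlite pull-requests github.db org/repo

# Publish the refreshed .db as a release asset, overwriting the previous one
# ("data" is a hypothetical release tag)
gh release upload data github.db --clobber
```

datasette-lite could then load the published file via its ?url= parameter, avoiding the need for anyone to run the pipeline locally.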
Other opinions on how to distribute code-reviews in an unbiased manner are welcome.
Let me know if anything else needs to be done to merge this. I didn't put it under ./libs since it is more of a management tool than a library, so I created the ./management/ folder for these things.
See the example of the assignees' visualisation below. The number to the right of each username is the number of pull requests in which they are involved:
Instructions on how to reproduce this can be found here.
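For reference, the count shown in that view is a simple aggregate over the tables github-to-sqlite produces; something like the query below, though the exact table and column names should be double-checked against the generated schema:

```bash
# Count PRs per assignee directly from the generated database
sqlite3 github.db "
  SELECT users.login, COUNT(*) AS n_prs
  FROM pull_requests
  JOIN users ON pull_requests.assignee = users.id
  GROUP BY users.login
  ORDER BY n_prs DESC;
"
```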