Open jayoung-lee opened 3 years ago
That's a really great and useful idea!
I believe we could use the GitHub API to do this. npm
already does this, and so does the VS Code Marketplace.
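As a rough sketch of what that could look like (the endpoint and field names below come from the GitHub REST API's "Get a repository" response; which fields pub.dev would actually surface is an open question):

```python
import json
import urllib.request


def extract_metrics(repo_json: dict) -> dict:
    """Pick out a few fields pub.dev might surface from a
    GET /repos/{owner}/{repo} response body."""
    return {
        "stars": repo_json["stargazers_count"],
        "open_issues": repo_json["open_issues_count"],
        "last_push": repo_json["pushed_at"],
    }


def fetch_repo_metrics(owner: str, repo: str) -> dict:
    """Fetch repo-level metrics from the GitHub REST API.
    Unauthenticated requests are heavily rate-limited; a real
    implementation would send an auth token and cache results."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:  # network call
        return extract_metrics(json.load(resp))
```

`fetch_repo_metrics("flutter", "flutter")` would return the stars, open-issue count, and last-push timestamp for the flutter/flutter repo.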
Very nice idea. This way developers save time, and having this data will make it easier to choose between similar packages.
One complicating factor here is packages from monorepos (flutter/plugins and flutter/packages being prominent examples, but certainly not the only ones); repo-level metrics will be misleading for per-package evaluation.
@stuartmorgan That's a good point. I think we shouldn't display the metrics when there's no 1:1 match. Do you know how many packages are from monorepos?
I don't, although we could probably cobble together a script to make an educated guess based on crawling pub.dev links. The problem with not showing anything would be that it would likely be confusing to have a lot of the most popular packages (from a quick manual check, it looks like about 50% of Flutter Favorites, for instance) not have the quality metrics.
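One way to make that educated guess without crawling every page: if we had each package's repository URL (e.g. from its pubspec), grouping packages by repo and flagging repos that host more than one package would approximate the monorepo set. A minimal sketch (the input mapping is assumed, not an existing pub.dev API):

```python
from collections import defaultdict


def find_monorepo_repos(package_repo_urls: dict[str, str]) -> dict[str, list[str]]:
    """Given {package_name: repository URL}, group packages by repo and
    keep only repos hosting more than one package (a rough monorepo signal)."""
    by_repo: dict[str, list[str]] = defaultdict(list)
    for package, url in package_repo_urls.items():
        # Normalize trailing slashes and a possible trailing ".git".
        key = url.rstrip("/").removesuffix(".git").lower()
        by_repo[key].append(package)
    return {repo: sorted(pkgs) for repo, pkgs in by_repo.items() if len(pkgs) > 1}
```

This would miss packages whose repo links point at subdirectories, so the output is only a lower bound.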
+1 This came up during research sessions for Flutter Vikings. Developers were also interested in seeing security vulnerabilities broken out as their own category if possible (perhaps through the use of GitHub labels), and # of tests.
We also talked to developers about being too intertwined with GitHub, but everyone we spoke with was happy to use only GitHub for hosting their packages, as opposed to other git providers.
@stuartmorgan In the case of monorepos: could we also try to look only for commits in directories that match the package name, and issue tags that match as well (or contain the name, so it would catch our p: labels)?
> could we also try to look only for commits in directories that match the package name
We'd want to spot-check some of the other monorepos, but that could likely work. (There are some subtle questions to resolve first about what exactly we even want to count when it comes to federated plugins.)
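For what it's worth, the GitHub commits endpoint already supports this kind of filtering via its documented `path` and `since` query parameters, so the per-package query could be as simple as (owner/repo/directory names here are just examples):

```python
import urllib.parse


def package_commits_url(owner: str, repo: str, package_dir: str, since_iso: str) -> str:
    """Build a GitHub REST URL listing only commits that touch one
    package's directory within a monorepo. `path`, `since`, and
    `per_page` are documented parameters of GET /repos/{o}/{r}/commits."""
    query = urllib.parse.urlencode(
        {"path": package_dir, "since": since_iso, "per_page": 100}
    )
    return f"https://api.github.com/repos/{owner}/{repo}/commits?{query}"
```

For federated plugins, the unresolved question above (what to count) would show up here as which `package_dir` values to pass in.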
> issue tags that match as well
Maybe? I just checked the Plus Plugins repo, and they don't appear to be labeling consistently.
I actually have significant reservations about issue counts as a metric we actively promote though. In practice, issue counts are measures of five things:
So of the five things that have a major influence on issue count, only one (really, only part of one) is any kind of indication of package or maintenance quality. There's so much noise that at best I think the number is meaningless. At worst, we would incentivize behavior that I would argue is undesirable, like running bots that close almost everything (which results in things like 20 one-off closed issues about the same thing, rather than one issue that collects discussion, votes, etc.).
I agree it is hard to tell anything for certain from many of these numbers, and many of them can be gamed to some extent.
Still, many people use the GitHub link to estimate how well-maintained a package is, so we should be able to extract something useful if we're careful.
If we are willing to do non-trivial queries, an interesting metric could be something based on the percentage of recent issues that a project member has interacted with in some way (comment, label, etc.). That would give some indication of whether someone is at least looking at issues.
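Once the interaction data has been collected (the issue shape below is a made-up intermediate format, not a GitHub API response), the metric itself is a simple ratio:

```python
def maintainer_response_rate(issues: list[dict], maintainers: set[str]) -> float:
    """Fraction of recent issues with any maintainer interaction
    (a comment, a label, etc.). Each issue dict is assumed to carry
    the logins of users who interacted with it."""
    if not issues:
        return 0.0
    touched = sum(
        1 for issue in issues if maintainers & set(issue["interacting_users"])
    )
    return touched / len(issues)
```

Unlike a raw issue count, this number goes up only when someone on the project actually engages, which is harder to game with close-everything bots.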
When evaluating a package that needs regular maintenance, users visit GitHub to check the package's recent activity and maintenance status. We could show key GitHub metrics within pub.dev to save users the trip to GitHub.
Key metrics that users mentioned in a recent user study were: