jsr-io / jsr

The open-source package registry for modern JavaScript and TypeScript
https://jsr.io
MIT License

Score should have harder items to improve library quality #292

Open EdJoPaTo opened 8 months ago

EdJoPaTo commented 8 months ago

The score should try to gamify developers into reaching better code quality. Adding more items that push packages to be even better than before should improve the overall quality of libraries on JSR.

For example, the Mozilla Observatory does the same: a score of 125/100 is possible.

Some ideas come to mind:

Yes, it's more annoying to reach, but you get better quality at the same time. Automated tests are there to help us improve quality.

KnorpelSenf commented 8 months ago

I disagree that there should be different ways in which TS is checked. I view the unification of TS configs as one of the biggest achievements, and I don't want to go back, especially given that there is no easy (let alone recommended) way of checking types with different configs using Deno.

EdJoPaTo commented 8 months ago

I see why people might not want to enable something like noUncheckedIndexedAccess by default. Code works without it, but code quality is better when it is enabled, because it prevents errors.
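A minimal sketch of what that option catches, assuming `"noUncheckedIndexedAccess": true` is set under `compilerOptions` in the project's TS config:

```ts
const names: string[] = ["alice", "bob"];

// With noUncheckedIndexedAccess, indexed access yields `string | undefined`
// instead of `string`, so out-of-bounds reads must be handled explicitly.
const third = names[2];

// third.toUpperCase(); // compile error: 'third' is possibly 'undefined'

if (third !== undefined) {
  console.log(third.toUpperCase()); // OK: narrowed to `string`
}
```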

This is the same with lints: code may work without them, but lints help ensure that it does.

The point of this issue is to have more checks in the score; what kind of checks is another question. When there are useful lints, go for them. Currently, TypeScript is a common factor across all the runtimes, so utilizing its features is a good idea in my opinion.

PoliWen commented 8 months ago

I totally agree that it's too easy to get 100 points now. Shouldn't we add some more challenging options, such as assessing code quality, analyzing the utility of packages, or supporting user ratings and voting?

marvinhagemeister commented 8 months ago

FYI: We purposely picked options that are easy to achieve to make it easy for folks to publish to JSR. If we add options with a high barrier to achieve, those risk putting people off publishing on JSR in the first place and thus might hinder adoption.

EdJoPaTo commented 8 months ago

That's why I think a score that can go higher than 100 might be a good idea. A 125/100 score, as I said in my first post, is something other tools do and that I like. It allows reaching 100, and it allows for reaching even more.

Positive reinforcement instead of "punishment" for not being as good.


Another idea that came to my mind is the score adapting over the years, like Rust editions do. The ecosystem is evolving: a JSR from a few years ago would have allowed CommonJS, while now it's discouraged or not even allowed. To be able to do something like this, the score should evolve over the years, to prevent the need for yet another JavaScript module platform doing things better (again).

Personally, I think that when the score isn't trying to get the best out of packages, it's basically useless and could be removed. There is no point in having something that only tells you the minimum. That would be a checklist, not a score.

KnorpelSenf commented 8 months ago

Regarding your latter point, wouldn't it suffice to adjust the scoring manually in the future when such changes occur? I believe that the last publish time is already a factor in the scoring, so modules that aren't updated frequently will automatically lose their good score.

EdJoPaTo commented 8 months ago

> Regarding your latter point, wouldn't it suffice to adjust the scoring manually in the future when such changes occur?

Yeah, that should happen on a regular basis. But I haven't seen anything stating that this should or is planned to happen (I haven't read everything, so I might have overlooked something like that).

> I believe that the last publish time is already a factor in the scoring, so modules that aren't updated frequently will automatically lose their good score.

It is not, and it shouldn't be, a factor. Changing the score when the criteria change is fine, but code does not go bad over time. Open issues would be better at indicating that something seems wrong than mere time (and even that is flawed, so showing the raw number is better than including it in the score).

jasongitmail commented 8 months ago

How about adding test coverage % and % of tests passing?

KnorpelSenf commented 8 months ago

That basically means that jsr has to run the test suite (or else the results cannot be trusted), which effectively means that jsr needs its own CI system. Is that what you're suggesting?

jasongitmail commented 8 months ago

Haven't brainstormed how it could work, only that it'd be useful as a quality indicator.

Either in the manner you describe, or by accepting the test and coverage results in a structured format from the dev's CI in the step where they publish to jsr; this would shift the CI cost to the dev, although the metric would be game-able when done that way.
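To illustrate, a purely hypothetical sketch of such a payload; these field names are invented, and JSR has no such API today:

```ts
// Hypothetical shape only; nothing like this exists in JSR today.
interface PublishQualityReport {
  testsPassed: number;
  testsFailed: number;
  lineCoveragePercent: number; // e.g. 93.4
  ciRunUrl: string; // link back to the CI run so results can be audited
}

const report: PublishQualityReport = {
  testsPassed: 142,
  testsFailed: 0,
  lineCoveragePercent: 93.4,
  ciRunUrl: "https://example.com/ci/run/123",
};
```

The auditable link back to the CI run would matter here, since self-reported numbers are exactly what makes the metric game-able.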

ericlery commented 8 months ago

> FYI: We purposely picked options that are easy to achieve to make it easy for folks to publish to JSR. If we add options with a high barrier to achieve, those risk putting people off publishing on JSR in the first place and thus might hinder adoption.

Additional scoring would benefit people who want to go the extra mile, and those who want to consume the best packages. I'm thinking of code coverage, user reviews per version, code examples, upvotes per version, release velocity, etc.

All optional, but they would boost a package's chances of being featured on the JSR homepage and in search results, and improve its discoverability within its package category.

I've actually made a similar observation on the code coverage side a couple of days earlier than this issue, in #219.

lucacasonato commented 7 months ago

We'll consider adding more items to the score in the future. Right now, the docs requirements already make it quite challenging to get a 100% score for existing packages.

In a couple of months we can consider adding more items.