baimafeima opened this issue 7 years ago
None of that has anything to do with the resulting quality of experience of the software, at all. In fact the only thing it seems to do is serve as a method to be bitchy about other projects that have to be patched to run on Solus, and it detracts from providing a positive experience.
^ Post-wake-up: Long story short, I don't want it to be bitchy. I want it to focus on the good points of the resulting software as applicable to the user.
I think knowing about actual and potential problems with a piece of software helps users make a more informed decision about whether to install and use a package, and that decision will lead to a more positive computing experience.
There may be satisfaction with well-designed, well-curated software (the immediate software experience), but a closer look may reveal architectural flaws and issues that affect users: vendor lock-in or a lack of interoperability, potential and known privacy violations buried in licenses or privacy statements, required sign-ups or registration before an application can be used, features limited by a freemium model, weak or missing encryption compared to other software of the same type, and so on. This is information a user should have readily available.
I am sure there is a way to frame this more positively (e.g. for IM apps: end-to-end encrypted, decentralized, security-audited, zero-logging policy), but the real benefit, I think, comes from the ability to compare the scorecards of applications of the same type. Obviously some would always score lower, though I believe the majority of applications would receive a relatively high score on a scale from A to F, as many problematic packages don't land in the repository in the first place. A positive side effect is that many developers may want to feature their scorecard label on their website, which in turn would draw attention to Solus.
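Purely to illustrate the comparison idea, here is a minimal sketch of ranking same-type apps by their scorecard. Everything in it is a made-up assumption on my part (the app names, the criteria, the 0-100 scores, and the A-F grade bands), not a proposed implementation:

```python
# Minimal sketch of comparing scorecards for apps of the same type.
# App names, criteria, scores, and grade bands are all made up.

im_apps = {
    "im-app-a": {"e2e_encryption": 95, "decentralized": 90, "audited": 85, "zero_logging": 90},
    "im-app-b": {"e2e_encryption": 80, "decentralized": 20, "audited": 70, "zero_logging": 40},
    "im-app-c": {"e2e_encryption": 10, "decentralized": 10, "audited": 30, "zero_logging": 20},
}

def letter_grade(score: float) -> str:
    """Map an average 0-100 score onto the A-F scale; the bands are arbitrary."""
    for threshold, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D"), (50, "E")):
        if score >= threshold:
            return grade
    return "F"

def average(scores: dict) -> float:
    return sum(scores.values()) / len(scores)

# Rank the apps of one type by overall score, best first.
for name, scores in sorted(im_apps.items(), key=lambda kv: -average(kv[1])):
    print(f"{name}: {letter_grade(average(scores))} ({average(scores):.0f}/100)")
```

The point is only that once every app of a type has a scorecard, a side-by-side comparison falls out almost for free.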
At the end of the day I run a Linux distribution, not a magazine. I'm not a mouthpiece for other software, and the only thing the user cares about is: "Is it shit?". Normal people do not give a damn about encryption standards or whether the dev flosses.
Some more ideas:
https://github.com/coreinfrastructure/best-practices-badge
http://manifesto.softwarecraftsmanship.org/
Ratings from https://goreportcard.com/
Something like https://tosdr.org/ with ratings from Class A to Class E
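If external ratings like these were ever pulled in, they would need normalising onto a single scale. A minimal sketch, where the individual mappings are my own arbitrary assumptions and not any project's official conversion:

```python
# Sketch: normalise heterogeneous external ratings onto one A-E scale.
# The mappings below are arbitrary assumptions for illustration only.

# CII best-practices badge levels (passing/silver/gold, or no badge).
CII_BADGE = {"gold": "A", "silver": "B", "passing": "C", None: "E"}

# Go Report Card style letter grades (A+ .. F).
def from_goreportcard(grade: str) -> str:
    return {"A+": "A", "A": "A", "B": "B", "C": "C", "D": "D"}.get(grade, "E")

# tosdr.org already rates from Class A to Class E, so it passes through.
def from_tosdr(klass: str) -> str:
    return klass if klass in {"A", "B", "C", "D", "E"} else "E"

def overall(ratings: list[str]) -> str:
    """Combine normalised ratings pessimistically: the worst class wins."""
    return max(ratings)  # "A" < "B" < ... < "E" lexicographically

print(overall([CII_BADGE["silver"], from_goreportcard("A+"), from_tosdr("B")]))  # -> "B"
```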
I don't like the usual user-based ratings of application packages (see Linux Mint), as these are too subjective, not transparent, and very often refer to a particular version or to a problem at a specific point in time.
Instead, I would suggest introducing a scorecard that evaluates a software project as a whole and informs users more objectively and transparently about the status and quality of an application from multiple perspectives. It would also be an efficient way to give feedback to developers and hopefully inspire them to improve their ratings. At the same time, it encourages Solus users to participate in the software curation process and blends in nicely with Solus' goals of providing best-in-class applications and making sure they are constantly maintained and improved.
Criteria that could be taken into account:
development model (open source vs. closed source/security through obscurity)
code quality, e.g. downstream patches required, or known security vulnerabilities that are not yet fixed
general release practices (tarballs, verified/signed releases, time since the last release)
trustworthiness of the project
usability and general user-friendliness
AppStream data completeness
commitment to distro agnosticism/cross-platform support
external security audits
commitment to privacy-by-design principles, etc.
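To make the proposal a bit more concrete, here is a minimal sketch of how some of these criteria could be recorded per package and turned into a score. The field names, weights, and penalties are entirely my own placeholder assumptions, not an agreed format:

```python
# Sketch of a per-package scorecard record; field names and weights
# are illustrative assumptions, not a proposed Solus format.
from dataclasses import dataclass

@dataclass
class Scorecard:
    package: str
    open_source: bool
    downstream_patches: int      # patches Solus must carry
    open_cves: int               # known, unfixed vulnerabilities
    signed_releases: bool
    days_since_release: int
    appstream_complete: bool
    security_audited: bool

    def score(self) -> int:
        """Very rough 0-100 score; all weights are arbitrary placeholders."""
        s = 100
        if not self.open_source: s -= 20
        s -= min(self.downstream_patches * 2, 10)
        s -= min(self.open_cves * 10, 30)
        if not self.signed_releases: s -= 10
        if self.days_since_release > 365: s -= 10
        if not self.appstream_complete: s -= 5
        if self.security_audited: s = min(s + 5, 100)
        return max(s, 0)

card = Scorecard("example-app", True, 3, 0, True, 120, True, False)
print(card.package, card.score())  # -> example-app 94
```

Whatever the exact fields end up being, keeping the record machine-readable would let the grade be recomputed automatically as packages are updated.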