gabibguti opened this issue 1 year ago
Since Scorecard mostly cares about consumers, I'm going to comment on this with my consumer hat on. As a consumer I don't think those badges can be used to actually assess projects, because they are rarely up to date (for example, openssl's badge was last updated in 2016 and systemd's in 2018). The fact that projects evaluate themselves doesn't make those badges any more trustworthy.
I'm not sure it should be fully removed but it shouldn't penalize projects that haven't filled out those forms.
Also, while trying to figure out why this check was included in Scorecard, I found commit 20cafaee4aadc23eeb780d13c2258e8048c2ac63, which gives no rationale for it. It would be great to figure out what exactly this check is supposed to accomplish. My guess is that it's meant to raise awareness (and I think that's a good thing overall), but I'm not sure why it should penalize projects.
This issue is stale because it has been open for 60 days with no activity. It will be closed in 7 days.
I disagree & recommend rejecting this issue.
Scorecard lets you do a quick automated check, but there are many things you can't check automatically. The Best Practices badge has many more criteria and allows consideration of many more issues. Scorecard can automatically determine whether a badge is in progress (and what its level is), enabling a quick estimation of risk.
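For reference, the badge status is available programmatically, which is presumably what Scorecard consumes. A rough Go sketch of that lookup, assuming (I haven't verified this against the current API) that /projects.json on bestpractices.dev accepts a url= filter and that each entry carries badge_level and updated_at fields:

```go
// Rough sketch: look up a repo's Best Practices badge entry.
// Assumptions (not verified): /projects.json accepts a url= filter and
// returns entries with badge_level and updated_at fields.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type badgeEntry struct {
	ID         int    `json:"id"`
	BadgeLevel string `json:"badge_level"` // e.g. "in_progress", "passing", "silver", "gold"
	UpdatedAt  string `json:"updated_at"`
}

func badgeFor(repoURL string) (*badgeEntry, error) {
	endpoint := "https://www.bestpractices.dev/projects.json?url=" + url.QueryEscape(repoURL)
	resp, err := http.Get(endpoint)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status %s", resp.Status)
	}

	var entries []badgeEntry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		return nil, err
	}
	if len(entries) == 0 {
		return nil, fmt.Errorf("no badge entry for %s", repoURL)
	}
	return &entries[0], nil
}

func main() {
	entry, err := badgeFor("https://github.com/coreinfrastructure/best-practices-badge")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("badge %d: level=%q, last updated %s\n", entry.ID, entry.BadgeLevel, entry.UpdatedAt)
}
```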
I wonder why those badges should be trusted. How can I be sure they reflect reality?
I've just taken another look at https://github.com/coreinfrastructure/best-practices-badge/blob/main/docs/vetting.md and it still says "Reduced incentives. We intentionally reduce the incentives for people to create false information". I don't think that's true anymore, since the SOS reward doesn't look like a reduced incentive to me. Either way, I'm curious what sort of vetting is actually involved.
To be a little bit more specific, I also took a look at the badge of a project I happened to see elsewhere recently: https://www.bestpractices.dev/en/projects/6358#analysis. The criterion there says:
It is SUGGESTED that if the software produced by the project includes software written using a memory-unsafe language (e.g., C or C++), then at least one dynamic tool (e.g., a fuzzer or web application scanner) be routinely used in combination with a mechanism to detect memory safety problems such as buffer overwrites.
and the project's justification reads:
We have implemented unit tests for all the fuzzing failures identified.
I don't think this condition is actually met. Unit tests aren't the same as continuous fuzzing, and that project isn't fuzzed on OSS-Fuzz, for example: https://github.com/google/oss-fuzz/pull/9635. There are no "usual" fuzz targets upstream either. It could of course be fuzzed downstream, but that would probably be mentioned explicitly, and there are no links anywhere.
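To make the distinction concrete, below is roughly what a "usual" fuzz target looks like, sketched with Go's native fuzzing purely for illustration (a C/C++ project like this one would more typically ship a libFuzzer LLVMFuzzerTestOneInput entry point and run it continuously, e.g. on OSS-Fuzz). parseConfig is a made-up stand-in, not something from that project:

```go
package parser

import (
	"bytes"
	"testing"
)

// parseConfig is a hypothetical stand-in for whatever the project parses;
// it only exists to make this sketch self-contained.
func parseConfig(data []byte) map[string]string {
	out := map[string]string{}
	for _, line := range bytes.Split(data, []byte("\n")) {
		if k, v, ok := bytes.Cut(line, []byte("=")); ok {
			out[string(k)] = string(v)
		}
	}
	return out
}

// FuzzParseConfig is a fuzz target: the fuzzer keeps generating new inputs,
// and sanitizers (or Go's runtime checks) catch crashes along the way.
// A unit test, by contrast, only ever replays a fixed set of inputs.
func FuzzParseConfig(f *testing.F) {
	// Seed corpus: known-interesting inputs, e.g. previously found crashers.
	f.Add([]byte("key=value"))
	f.Add([]byte(""))
	f.Fuzz(func(t *testing.T, data []byte) {
		_ = parseConfig(data)
	})
}
```

Turning previously found crashers into unit tests is a good regression guard, but it's not the same as a dynamic tool being "routinely used", which is what the criterion asks for.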
I wonder how that particular answer was vetted?
I think another option would be to throw out badges that are clearly out of date. For example, looking at https://www.bestpractices.dev/en/projects/569#security and all those dead links from 2017, it's clear that that badge isn't actively maintained. I'm not sure how it can be used to infer anything about the current state of the project.
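That kind of cutoff could even be automated from the same JSON the badge site serves; a rough sketch, assuming (not verified) that each entry exposes an RFC 3339 updated_at timestamp:

```go
package main

import (
	"fmt"
	"time"
)

// staleBadge reports whether a badge entry hasn't been touched within maxAge.
// The updated_at field name and RFC 3339 format are assumptions about the
// bestpractices.dev projects.json output.
func staleBadge(updatedAt string, maxAge time.Duration, now time.Time) (bool, error) {
	t, err := time.Parse(time.RFC3339, updatedAt)
	if err != nil {
		return false, err
	}
	return now.Sub(t) > maxAge, nil
}

func main() {
	// Example: treat anything untouched for more than two years as stale.
	stale, err := staleBadge("2017-05-12T09:30:00Z", 2*365*24*time.Hour, time.Now())
	if err != nil {
		fmt.Println("bad timestamp:", err)
		return
	}
	fmt.Println("discard badge as stale:", stale)
}
```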
Either way I think it's a low-quality signal and it should be revisited one way or another. If all the badges were actually maintained and vetted they would be great in terms of assessing projects but I'm pretty sure that's not going to happen.
This issue has been marked stale because it has been open for 60 days with no activity.
Is your feature request related to a problem? Please describe.
I am frustrated that Scorecard requires projects to have an OpenSSF (formerly CII) Best Practices badge. For me, Scorecard and the OpenSSF Best Practices badge are different tools that evaluate different aspects of a project, so they should be independent.
Describe the solution you'd like
I'd like us to remove the CII-Best-Practices check.
Describe alternatives you've considered
An alternative would be to make the CII-Best-Practices check optional. We would then still check for the OpenSSF Best Practices badge, but without making it a requirement for achieving a 10/10 overall Scorecard score (a rough consumer-side sketch of what that could look like follows below).
Additional context
None.
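As a rough illustration of the optional alternative above: a consumer who doesn't care about the badge could already recompute their own aggregate from Scorecard's JSON output and simply skip the check. A sketch, assuming the --format=json output has a checks array with name and score fields (and using a plain unweighted average, whereas Scorecard's real aggregation weights checks by risk):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// result mirrors just the fields this sketch needs from Scorecard's JSON
// output; the field names are assumed and may need adjusting.
type result struct {
	Checks []struct {
		Name  string  `json:"name"`
		Score float64 `json:"score"`
	} `json:"checks"`
}

func main() {
	var r result
	if err := json.NewDecoder(os.Stdin).Decode(&r); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}

	var sum float64
	var n int
	for _, c := range r.Checks {
		// Skip the badge check and any check with a negative (inconclusive) score.
		if c.Name == "CII-Best-Practices" || c.Score < 0 {
			continue
		}
		sum += c.Score
		n++
	}
	if n == 0 {
		fmt.Println("no scored checks")
		return
	}
	fmt.Printf("aggregate without CII-Best-Practices: %.1f\n", sum/float64(n))
}
```

This would be fed with something like scorecard --repo=github.com/<org>/<repo> --format=json.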