ossf / scorecard

OpenSSF Scorecard - Security health metrics for Open Source
https://scorecard.dev
Apache License 2.0

Factor whether or not private vulnerability reporting is enabled into the scorecard #2465

Open JasonKeirstead opened 1 year ago

JasonKeirstead commented 1 year ago

GitHub has finally added the ability for repository owners to turn on private vulnerability reporting, making it easier for all parties involved to disclose vulnerabilities securely.

https://docs.github.com/en/code-security/security-advisories/guidance-on-reporting-and-writing/privately-reporting-a-security-vulnerability

The option is not enabled by default right now.

I believe having this enabled should be considered a best practice and factored into the scorecard.

david-a-wheeler commented 1 year ago

It's currently in beta; let's make sure it works first. Also: do we want to mandate that GitHub projects use it (the implication here)? Some may want to report in another way, and the real goal is reporting, not the mechanism.

katzj commented 1 year ago

Yeah, I think this would be best counted as success for various parts of the Security-Policy check

github-actions[bot] commented 1 year ago

Stale issue message - this issue will be closed in 7 days

github-actions[bot] commented 11 months ago

This issue is stale because it has been open for 60 days with no activity.

spencerschrock commented 11 months ago

In terms of detecting this, I'm not sure what endpoints we'd have available.

There's one to publish an advisory, but I haven't stumbled across one to query whether a project has it enabled. https://docs.github.com/en/rest/security-advisories/repository-advisories?apiVersion=2022-11-28#privately-report-a-security-vulnerability

github-actions[bot] commented 9 months ago

This issue is stale because it has been open for 60 days with no activity.

laurentsimon commented 8 months ago

There's now an API for it: https://github.blog/changelog/2024-03-08-check-if-private-vulnerability-reporting-is-enabled-via-rest-api/

Who's interested in implementing this new probe? Maybe we could add it to the Security-Policy check?

Thanks @josepalafox for the info!
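For reference, a minimal Go sketch of what querying that endpoint could look like. The `/repos/{owner}/{repo}/private-vulnerability-reporting` path and the `{"enabled": ...}` response shape are taken from the changelog entry above; verify both against the current GitHub docs before relying on them:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// privateReportingEnabled asks GitHub whether private vulnerability
// reporting is enabled for owner/repo, using the REST endpoint from the
// 2024-03-08 changelog. Path and response shape are assumptions to verify.
func privateReportingEnabled(owner, repo, token string) (bool, error) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/private-vulnerability-reporting", owner, repo)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("X-GitHub-Api-Version", "2022-11-28")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, fmt.Errorf("unexpected status: %s", resp.Status)
	}

	var body struct {
		Enabled bool `json:"enabled"` // {"enabled": true} per the changelog
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}
	return body.Enabled, nil
}

func main() {
	enabled, err := privateReportingEnabled("ossf", "scorecard", os.Getenv("GITHUB_TOKEN"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("private vulnerability reporting enabled:", enabled)
}
```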

josepalafox commented 8 months ago

And @katecatlin

KateCatlin commented 8 months ago

Hi all! As was just shared above, we did indeed add this endpoint today to help you and others check at scale whether private vulnerability reporting is enabled.

Keep us posted on how the implementation goes! This is a really exciting development, and we completely agree that it's a strong indicator of a repository taking security seriously.

spencerschrock commented 7 months ago

> Hi all! As was just shared above, we did indeed add this endpoint today to help you and others check at scale whether private vulnerability reporting is enabled.

Just to confirm: is this just REST for now, or GraphQL as well? I see a hasVulnerabilityAlertsEnabled field in https://docs.github.com/en/graphql/reference/objects#repository, but I think that corresponds to this REST API? https://docs.github.com/en/rest/repos/repos?apiVersion=2022-11-28#check-if-vulnerability-alerts-are-enabled-for-a-repository

pnacht commented 7 months ago

> Just to confirm: is this just REST for now, or GraphQL as well? I see a hasVulnerabilityAlertsEnabled field in https://docs.github.com/en/graphql/reference/objects#repository, but I think that corresponds to this REST API? https://docs.github.com/en/rest/repos/repos?apiVersion=2022-11-28#check-if-vulnerability-alerts-are-enabled-for-a-repository

Yeah, that GraphQL field checks for Dependabot vulnerability alerts, not private reporting.

KateCatlin commented 7 months ago

Yes, confirming this is only on the REST API for now :)
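To make the distinction concrete, here's a rough sketch of the Dependabot-alerts check, which signals its answer via HTTP status code rather than a JSON body. This function drops into the earlier sketch and reuses its imports; the 204/404 semantics come from the REST docs linked above:

```go
// vulnerabilityAlertsEnabled checks the endpoint the GraphQL
// hasVulnerabilityAlertsEnabled field corresponds to: Dependabot
// vulnerability alerts, which are unrelated to private reporting.
// Per the REST docs, 204 means enabled and 404 means disabled.
func vulnerabilityAlertsEnabled(owner, repo, token string) (bool, error) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/vulnerability-alerts", owner, repo)
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusNoContent: // 204: alerts are enabled
		return true, nil
	case http.StatusNotFound: // 404: alerts disabled (or repo not accessible)
		return false, nil
	default:
		return false, fmt.Errorf("unexpected status: %s", resp.Status)
	}
}
```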

Chealer commented 7 months ago

> [...] Also: do we want to mandate that GitHub projects use it (the implication here)?

Of course not. Scorecard is about scoring, not mandating, and GitHub's new feature is just one of countless tracking mechanisms.

> Some may want to report in another way, and the real goal is reporting, not the mechanism.

The goal is tracking, not reporting. What mostly matters is the ability to inform enough people with a high (qualification-to-solve / maliciousness-risk) ratio about discovered or suspected vulnerabilities, while avoiding disclosure to people with a low ratio. What we need is a metric estimating issue-tracking performance. Publishing a maintainer's street address is good, but having a proper ITS (issue tracking system) is far more reliable and efficient. GitHub's new feature could be even more interesting than either of these in some situations. Even better mechanisms allow reporters to claim their report while controlling its visibility.

If a project with only a public discussion forum scores 3/10, it might reach 7/10 once it has had a good but public-only ITS for 5 years. Adding GitHub's feature to that mix could raise it to, say, 8.

I'd suggest calling the metric "Issue tracking". I think it would be complicated to evaluate public and private reporting in separate metrics, but it could be given some thought. Ideally, this would also consider other aspects:

pnacht commented 7 months ago

My solution here would be to simply include this information in the Security-Policy check.

If the project has a solid security policy that already gets a 10/10... I honestly don't care much whether they have this feature enabled. However, if the project doesn't have a security policy but does have this feature enabled, that's already pretty great! I'm spitballing here, but I assume most serious security reports come from security researchers, the vast majority of whom are probably aware of this feature and would check for it manually if necessary.

So my suggestion would be:

diogoteles08 commented 7 months ago

I think I agree with Pedro on most points, but I'm also concerned about preventing the following scenario:

A project has a valid security policy (currently scoring 10/10) whose content points readers to report security vulnerabilities through GitHub's private vulnerability reporting (without mentioning any other contact method), but the feature is not enabled.

I know it might seem rare, but last year the GOSST Upstream Team worked on suggesting security policies for some repositories, and we ran into that scenario a significant number of times.

Maybe we could use a regex to evaluate whether the security policy suggests GitHub's tool and, if that's the case, require that the feature be enabled in order to get 10/10?
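A rough sketch of that regex idea, with purely illustrative patterns and a hypothetical policyMentionsPVR helper (nothing here is existing Scorecard code):

```go
package main

import (
	"fmt"
	"regexp"
)

// pvrHints holds illustrative patterns (not a vetted list) for a security
// policy that points reporters at GitHub's private vulnerability
// reporting, e.g. a link to .../security/advisories/new.
var pvrHints = regexp.MustCompile(
	`(?i)(private vulnerability reporting|security/advisories/new|privately report(ing)? a (security )?vulnerabilit(y|ies))`)

// policyMentionsPVR reports whether a SECURITY.md appears to rely on the
// feature; combined with the REST check sketched earlier, a check could
// withhold the top score when the policy points to PVR but it's disabled.
func policyMentionsPVR(securityPolicy string) bool {
	return pvrHints.MatchString(securityPolicy)
}

func main() {
	policy := "Please privately report a vulnerability via GitHub."
	fmt.Println(policyMentionsPVR(policy)) // true
}
```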

pnacht commented 7 months ago

That sounds like a good idea, yeah. Dunno if it needs to be done all in one go, though. A PR that simply rewards users for having the feature enabled seems simple enough; a second PR can later add the nuance of "they point to this feature, but don't have it enabled".

diogoteles08 commented 6 months ago

FYI, this feature doesn't have a corresponding one on GitLab, so it's probably exclusive to GitHub.

If one wants to report a vulnerability on a GitLab project, GitLab suggests that non-maintainers create Confidential Issues, which are enabled by default.

david-a-wheeler commented 6 months ago

@diogoteles08 - it sounds like "Confidential Issues" are more or less the corresponding feature. Can outsiders determine if they're enabled?

diogoteles08 commented 6 months ago

I'm not a long-term GitLab user or anything like that, but from my quick research, Confidential Issues are enabled by default -- it's a simple checkbox you can tick when you're raising any issue.

More specifically, I think it's true that "if I can create an issue for the project, I can create a confidential issue", so if you can't create an issue for the project, you can't create a confidential issue. But I don't think this should be a concern, because if a project is not open to new issues, it's likely actually being developed and accepting contributions on some other platform (like GitHub or even mailing lists) and only has a mirror on GitLab, for example.

david-a-wheeler commented 6 months ago

@diogoteles08 - I think Scorecard should (eventually) check if there's a way to report vulnerabilities. If it's on GitLab, eventually we should check whether confidential issues are enabled or an email address for reporting vulnerabilities is provided. It's okay if that's not in the initial implementation; it's just good to have that as a long-term objective.

Having no obvious way to report vulnerabilities is a sadly common problem :-(.