luigigubello opened this issue 3 years ago
Thanks @luigigubello! We'll discuss this in more detail at our next workgroup meeting, and to be honest, I'm not sure where exactly to draw the line. I'm certain others have thought deeply about this problem, and would welcome their thoughts.
Some suggestions off the top of my head:
This is a great question @luigigubello. FWIW, I am perfectly spiritually aligned with @scovetta on his categorizations above. I would not require CVEs for something to be considered disclosed - for more common, lower-impact bugs or repetitive bugs, it's pretty common not to get CVEs even though a proper coordinated disclosure happened, patches exist publicly, advisories have been published, etc.
Thank you both for the replies :raised_hands: I like your suggestions @scovetta
- Publicly reported (i.e. public bug tracker, Twitter, etc.), no fix available - ???
  - Pro: Since it's already public, attackers will have access to this information. Withholding this information from users isn't in anyone's best interest.
  - Con: If we post information, we may be divulging it to additional attackers, which could cause more harm.
At the moment, I think this is the only scenario we need to analyze, because it's not so obvious how we should handle it. We're aligned on the other points, perfect!
Perhaps we have a time element to it? 90 days seems to be the industry norm now, so:
I don't think there's really an industry-wide norm, but requiring a delay of more than 90 days before posting something here without a fix seems like a good idea. The point of security-reviews is to post general reviews about some software. There are separate processes for rapid vulnerability reports (like reporting to suppliers and creating CVEs). If an analysis finds a new vulnerability, we should do what we can to encourage people to use those mechanisms instead.
- Publicly reported (i.e. public bug tracker, Twitter, etc.) > 90 days ago, no fix available - OK to post
- Publicly reported (i.e. public bug tracker, Twitter, etc.) <= 90 days ago, no fix available - NOT OK to post
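For clarity, the proposed cutoff above boils down to a simple date comparison. Here's a minimal sketch of that rule, assuming a hypothetical `ok_to_post` helper and the 90-day window suggested above (the function name and signature are illustrative, not part of any existing tooling):

```python
from datetime import date, timedelta

# Hypothetical helper illustrating the proposed policy:
# a publicly reported vulnerability with no fix available is
# OK to post only if it was reported more than 90 days ago.
DISCLOSURE_WINDOW = timedelta(days=90)

def ok_to_post(reported_on: date, today: date) -> bool:
    """Return True if the 90-day window has elapsed since public report."""
    return (today - reported_on) > DISCLOSURE_WINDOW

# Reported 151 days ago: window elapsed, OK to post
print(ok_to_post(date(2023, 1, 1), today=date(2023, 6, 1)))   # True
# Reported 31 days ago: still inside the window, NOT OK to post
print(ok_to_post(date(2023, 5, 1), today=date(2023, 6, 1)))   # False
```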
@scovetta I think it could be a good policy.
I know this may be a trivial question, but what do we mean by "undisclosed security vulnerability"? Do we mean that the vulnerability has no CVE ID and is not in any vulnerability database? In particular: sometimes maintainers use a "Security" tag on issues or PRs to identify security issues, but they don't disclose them clearly and don't assign them a particular ID or advisory, and these security issues are probably not indexed by vulnerability databases. Can a public security issue also be an undisclosed vulnerability?