Closed: FiloSottile closed this issue 3 years ago
Thanks for the outstanding work the Security Team has been doing, Filippo!
It would be helpful if the issues were also tagged to distinguish between vulnerabilities in the Go toolchain and vulnerabilities in the Go standard library.
The distinction is not cut-and-dried in this era of automated CI/CD deployments and state-level actors engaging in supply-chain attacks, but it would help users assess whether the release warrants expedited deployment or not.
Ah, that's a good idea, @fazalmajid. The two classes do require different preparations, sometimes even by different teams, so it makes sense to mention that in the pre-announcement.
We can use a statement like this in pre-announcements: "The upcoming Go 1.A.B and Go 1.X.Y releases include fixes for HIGH severity (per our policy at golang.org/security) vulnerabilities in the Go toolchain / in the Go standard library / in both the Go toolchain and the Go standard library."
Why do you want to use a three-tier severity scale, when the common practice in industry is a four-tier scale? Even the OpenSSL policy mentioned in the proposal uses a four-category scale. Wouldn't it be better to follow common practice and use a four-tier system? You may take a look at the CVSSv3 specification and correlate your severity scale with the severity rating scale described there (https://www.first.org/cvss/specification-document).
The handling procedures you mention could be similar for Medium and High severity issues; you would simply gain some flexibility to better express the potential impact of a vulnerability.
Thanks for the outstanding work, @FiloSottile!
For CRITICAL security issues, would it be possible to notify some major companies using Go (such as cloud vendors) in advance? Because they manage a large number of services, they need more time to prepare for changes.
@p-rog I prefer to ask why introduce a fourth tier, when we wouldn't do anything differently for it? What benefit would it provide? How would we pick what's a MEDIUM and what's a HIGH? What would we communicate to users about how differently they should treat them?
The current criteria are clear: LOW are things we are comfortable fixing in public, CRITICAL are things we want to fix right now, HIGH is everything else. It's not an easy assessment to make, but it's a necessary and useful one. What would be the criteria for MEDIUM?
CVSS is an excellent example of how these scales break down when they try to apply more rigid and abstract criteria to software that's reused in diverse contexts. In my experience, CVSS is unusable for anything that is not a piece of software that's deployable on its own: for example, how do you pick remote vs local exploitation for a library? If it's used on remote inputs, it's remote, if it's used on local inputs, it's local! (This is not a made up example, different distributions scored the recent RCE in libgcrypt differently with CVSSv3 because of rating it local vs remote. In our scale, it'd be clearly a "let's fix that right now", so a CRITICAL.) A standard library is the ultimate context-dependent software, so it would be especially meaningless for us to try and use criteria like the CVSS ones.
I do not understand the `go get` policy. As far as I understand, `go get` will run the system toolchain if the downloaded package requests that. The system toolchain has not been designed to avoid code execution. This means that `go get` will always be reasonable-effort only in terms of avoiding code execution, and cannot provide any strong guarantees. Classifying code execution on `go get` as High seems problematic, given that you can harden and patch only the Go parts.
@FiloSottile I understand your point of view. Your proposed severity levels are in direct relation to how you want to handle these cases. But that's not the purpose of a severity rating. The severity rating should show how serious the vulnerability is. How you handle the cases is of course related to the severity level, but a severity scale takes into account the potential risk of the discovered vulnerability. Maybe take a look at Red Hat's security ratings (https://access.redhat.com/security/updates/classification).
In regards to CVSS, it's not ideal, because not every use case can be captured in a single CVSS score for a vulnerability. But the worst scenario should be taken into consideration in the CVSS calculation; then CVSS makes sense. A flaw could be HIGH overall, but in some scenarios, such as an application using only local inputs, the impact could be lower and the CVSS could be different; that case would be covered by the application vendor. In other words, a flaw in the standard library should be analyzed by you in relation to the worst possible scenario, and based on that you should assign the appropriate severity level and express it in CVSS.
Based on my experience, a three-tier scale can't handle all cases. That's why a four-tier scale is more popular in industry. I agree that sometimes it's difficult to decide whether an issue is MEDIUM or HIGH, but going forward it's still better than a three-tier scale.
In other words, a flaw in the standard library should be analyzed by you in relation to the worst possible scenario, and based on that you should assign the appropriate severity level and express it in CVSS.
If analysed in the worst possible scenario, no vulnerability in the standard library (and arguably in any library) is ever going to be local, since applications might take remote input and pass it to the library, but that score is not going to be particularly useful to most users.
However, it's true that we might be misusing the concept of severity, especially if we'd score any non-CRITICAL vulnerability as LOW if it's already widely known and not worth fixing in private.
Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?
I do not understand the `go get` policy. As far as I understand, `go get` will run the system toolchain if the downloaded package requests that. The system toolchain has not been designed to avoid code execution. This means that `go get` will always be reasonable-effort only in terms of avoiding code execution, and cannot provide any strong guarantees. Classifying code execution on `go get` as High seems problematic, given that you can harden and patch only the Go parts.
That's a good point. I'd be open to declaring `go get` code execution protections best-effort, and rating those fixes LOW (or PUBLIC, or whatever equivalent rating). Most other language ecosystems have code execution at build time, so it's not a common security expectation. @rsc?
(In general, we should progressively document the security expectations of the various parts of the distribution, but that's beyond the scope of this proposal.)
Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?
If it's not called a severity scale but rather a "handling scale" or "handling types", then it's a very good idea! You can then assign a severity rating based on your judgement when assigning the CVE, if you would like to have severity ratings at all.
Most other language ecosystems have code execution at build time, so it's not a common security expectation.
If we have a vulnerability that can cause code execution while downloading (but not building or running) module dependencies, such as for `go mod download` or `go get -d`, then I would prefer that we treat those as HIGH severity. It's one thing to expect that users audit their dependencies; it's another altogether to expect them to audit their dependencies before they even download the source code.
Maybe we should rename the tiers PUBLIC, PRIVATE, and URGENT (or something similar, if anyone has better ideas)?
That reads a little odd to me, and it's too focused on the mechanism rather than the criticality; I think people who work on security concerns at their orgs but are not deep in the Go ecosystem would be confused to hear "a PRIVATE-level security issue has been discovered and will be addressed in release X.Y on date Z".
The original LOW/HIGH/CRITICAL sounds fine to me, FWIW.
That reads a little odd to me, and it's too focused on the mechanism rather than the criticality; I think people who work on security concerns at their orgs but are not deep in the Go ecosystem would be confused to hear "a PRIVATE-level security issue has been discovered and will be addressed in release X.Y on date Z".
The original LOW/HIGH/CRITICAL sounds fine to me, FWIW.
But the proposed severity scale is based on how cases will be handled; that's why it would be better to call it a "handling scale" or "handling types", with levels PUBLIC, PRIVATE, and URGENT.
A severity scale should be directly related to the impact of the flaws.
To be clear, if we do switch to something like PUBLIC, PRIVATE, and URGENT, we will not surface those labels in announcements. We'll simply pre-announce an undisclosed vulnerability fix for PRIVATE vulnerabilities, and just list them in the release announcements for the rest.
ah ok thanks :)
I updated the proposal to refer to PUBLIC/PRIVATE/URGENT tracks rather than severity, based on the feedback in this thread.
Based on the discussion above, this proposal seems like a likely accept. — rsc for the proposal review group
No change in consensus, so accepted. 🎉 This issue now tracks the work of implementing the proposal. — rsc for the proposal review group
Change https://golang.org/cl/352029 mentions this issue: _content: update security policy
Change https://go.dev/cl/393357 mentions this issue: internal/history: split Release summary into bug- and security fixes
Background
The current Go security policy, golang.org/security, dictates that whenever a valid security vulnerability is reported, it will be kept confidential and fixed in a dedicated release.
The security release process is handled by the Security and Release teams in coordination, and deviates from the general release process in that for example it doesn't use the public Builders or TryBots. This led to issues going undetected in security releases in the past.
There are no tiers, and the distinction is binary: either something is a security fix, or it’s not.
Security releases are pre-announced on golang-announce three days before the release.
We’ve issued six security releases in the past eight months, on top of the eight regularly scheduled point releases.
Proposal
We propose introducing three separate tracks for security fixes: PUBLIC, PRIVATE, and URGENT.
The Security team reserves the right to choose the track of specific issues in exceptional circumstances based on our case-by-case assessment.
We also propose the following handling procedures for each track.
All security issues are issued CVE numbers.
Motivation
Fundamentally, this proposal is about making the security policy scale.
Every package can be used in many different ways, some of them security-critical depending on context. So almost anything not behaving as documented can be argued to be a security issue. We want to fix these issues for affected users, but doing so in separate security releases imposes a cost on all Go users. With each security release, the Go community needs to scramble to assess it and update. If security releases become too frequent, users will stop paying attention to them, and the ecosystem will suffer.
The introduction of the tracks helps the community assess their exposure in each point release, and merging the security and non-security patch releases will lead to fewer overall updates and a more predictable schedule.
Originally, the rationale for dedicated security releases was that there should be nothing in the way of applying a security patch, like concerns about the stability of other changes. However, since security releases are made on top of the previous minor release, this only works if systems were updated to the latest minor release in the time between that and the security release. This time is on average two weeks, which doesn’t feel like long enough to be valuable. It’s also important to note that only critical fixes are backported to minor releases in the first place.