Closed Qwaz closed 2 years ago
After all, all memory safety bugs are soundness bugs, and I'm concerned with the possibility of dismissing a bug with actual security impact as an informational one. It's a problem that the security research community has been trying to prevent for decades:
I think "could" in "does not contain a concrete security vulnerability, but exposes an unsound API which could lead downstream consumers to unintentionally create security vulnerabilities in their code" from this comment can mean many things here, which seems to be the source of the disagreement, but let's take it step by step. Once we agree on the definition of library bugs, I believe we can come up with a less debatable description.
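To make "exposes an unsound API" concrete, here is a minimal hypothetical sketch (the function name `uninit_vec` is illustrative, not from any real crate): a function that is declared safe but can hand out uninitialized memory, so downstream consumers could cause undefined behavior without writing any `unsafe` themselves.

```rust
// Hypothetical unsound safe API. It compiles and is callable from
// safe code, but it violates Rust's safety contract: the returned
// Vec's elements are never initialized.
pub fn uninit_vec(len: usize) -> Vec<u8> {
    let mut v = Vec::with_capacity(len);
    // UNSOUND: the length is set without writing the elements,
    // so callers can observe uninitialized bytes.
    unsafe { v.set_len(len) };
    v
}

fn main() {
    let v = uninit_vec(8);
    // A downstream consumer might now read v's contents -- that
    // read would be undefined behavior, even though the consumer
    // wrote only safe code. We deliberately do not read them here.
    assert_eq!(v.len(), 8);
}
```

Whether such a library "has a security vulnerability" before any downstream consumer actually misuses it is exactly the question under discussion.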
@tarcieri, and probably @RalfJung, could you share your opinion on this topic, where you agree or disagree on the definition of security bugs of libraries?
> After all, all memory safety bugs are soundness bugs, and I'm concerned with the possibility of dismissing a bug with actual security impact as an informational one.
The point of the informational unsoundess advisories is to track more things, to track problems like RustSec/advisory-db#299, RustSec/advisory-db#298, RustSec/advisory-db#290 that currently are not tracked at all because it is unclear whether they can lead to a concrete vulnerability.
Certainly this new category should not be used to downgrade anything that would previously already have qualified as a vulnerability.
> could you share your opinion on this topic, where you agree or disagree on the definition of security bugs of libraries?
The definition sounds sensible to me. Notice however that I am not a security researcher and have no experience in classification and handling of security problems. I typically stop searching when I find an unsoundness, and care little about whether that unsoundness is "purely theoretical" or actually leads to bugs, let alone security vulnerabilities. So I don't feel entirely qualified to comment here.
Here's a good example to test the policy: https://github.com/RustSec/advisory-db/pull/317. Should this be informational or be considered a vulnerability?
I think #317 is not a security bug, with one point of ambiguity. If I understand the bug correctly, the known consequences are:

1. The `memoffset_constant_expression` feature is on. This generates a trap on certain architectures (namely AArch64), which can be a DoS vector in certain use cases. CONTRIBUTING.md says that not all panics are security vulnerabilities, so it needs triage in that regard. This is the only ambiguity that I have.
2. The `memoffset_constant_expression` feature is off. Per my comment in #300, I don't think it is a security bug.
3. The `Deref` implementation panics. Although the outcome of it has a security impact, I don't think this is a security bug.

The third point raises a very interesting question. I feel like it is in line with the question raised in the unsafe_cmp RFC. Rust's `Deref` documentation says that this trait should never fail. Similarly, the documentation of `ExactSizeIterator`, `PartialOrd`, and `Hash` also describes the behavior of implementations with enforcing words such as *must* and *never*.
I know one data structure library that creates a memory safety issue when given an inconsistent `PartialOrd` implementation, but I didn't file a security advisory at that time because I thought it didn't qualify as a security bug. Moreover, I can easily imagine that there will be a lot of similar cases in the wild. If user code has a fluctuating `PartialOrd` implementation, which is prohibited by the libcore documentation, I feel like it's a bug on the user's part, not in the library that assumes `PartialOrd` consistency. I think programmers have legitimate reasons to say "won't fix" for such bugs, because operations such as hashing and comparison are fundamental building blocks of program logic, and not being able to rely on their correctness may involve substantial performance cost or might not even be possible in some cases. My judgement on #317 is in line with this; it's not a security bug, not because it's uncommon, but because libcore says `Deref` should never fail. If it were just a very uncommon case, I would say that it is a (low impact) security vulnerability.

The current suggested definition doesn't seem to capture this aspect. Maybe we can borrow some ideas from your comment and add a note about the libcore interaction.
> I know one data structure library that creates a memory safety issue when given an inconsistent `PartialOrd` implementation, but I didn't file a security advisory at that time because I thought this doesn't qualify as a security bug.
I can see that argument. However, this problem does constitute a soundness issue: safe code can create "bad" `PartialOrd` implementations, so if that leads to UB, we have safe code causing UB.
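A "bad" `PartialOrd` really is expressible entirely in safe code. Here is a minimal sketch (the type name `Evil` is invented for illustration): an ordering that flips its answer on every call, violating the consistency contract documented for `PartialOrd`/`Ord`, without a single `unsafe` block.

```rust
use std::cell::Cell;
use std::cmp::Ordering;

// A "fluctuating" ordering, written entirely in safe code: each
// comparison returns the opposite of the previous one.
struct Evil {
    flip: Cell<bool>,
}

impl PartialEq for Evil {
    fn eq(&self, _: &Evil) -> bool {
        false
    }
}
impl Eq for Evil {}
impl PartialOrd for Evil {
    fn partial_cmp(&self, other: &Evil) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Evil {
    fn cmp(&self, _: &Evil) -> Ordering {
        let f = self.flip.get();
        self.flip.set(!f);
        if f { Ordering::Less } else { Ordering::Greater }
    }
}

fn main() {
    let a = Evil { flip: Cell::new(false) };
    let b = Evil { flip: Cell::new(false) };
    // Two consecutive comparisons of the same pair disagree.
    assert_ne!(a.cmp(&b), a.cmp(&b));
}
```

std's sort and collections are written to stay memory-safe even under such orderings, but a third-party library whose `unsafe` code assumes a consistent `Ord` could be driven to UB by this perfectly safe type -- which is what makes it a soundness issue in the library.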
Thus I strongly disagree with this:
> I think programmers have legitimate reasons to say "won't fix" for such bugs.
I do not think this is a legitimate answer for any soundness bug. The recommended approach for when unsafe code really needs to rely on the behavior of client-supplied functions is to use an `unsafe trait`.
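The `unsafe trait` pattern mentioned here can be sketched as follows (the trait name `TrustedOrd` is hypothetical): marking the trait itself `unsafe` shifts the proof obligation to the implementor, so the library's unsafe code may legitimately rely on the contract.

```rust
use std::cmp::Ordering;

/// Implementors promise that `cmp` is a consistent total order:
/// the same pair of values always compares the same way. Because
/// the trait is `unsafe`, implementing it requires `unsafe impl`,
/// and unsafe code in a library may rely on the promise without
/// being unsound itself.
unsafe trait TrustedOrd {
    fn cmp(&self, other: &Self) -> Ordering;
}

// The implementor takes on the obligation; for i32 the contract
// is trivially upheld by delegating to the standard Ord.
unsafe impl TrustedOrd for i32 {
    fn cmp(&self, other: &Self) -> Ordering {
        Ord::cmp(self, other)
    }
}

fn main() {
    assert_eq!(TrustedOrd::cmp(&1i32, &2i32), Ordering::Less);
}
```

With this design, a fluctuating comparison can only arise from an incorrect `unsafe impl`, so any resulting UB is clearly the implementor's bug, not the library's.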
Sorry about closing this issue. Feel free to reopen it.
I'm going to reopen this because I feel this is important to establish (eventually). Thanks to @Qwaz for starting this conversation.
I sadly don't have the bandwidth to participate in the discussion right now, but this should not discourage other people from doing so.
Given my day job as a security researcher, I'm admittedly much more sensitive than other people about calling something "not a security bug", and I'm really sorry if my words felt too aggressive because of that. Thank you for reopening this issue, and I'll try better this time.
I have thought about what's essential in this issue while it was locked. I think I was too obsessed with determining whether a bug is a security vulnerability or not, assuming that the current definition of `informational advisory == non-security bug` is immutable. However, what I have gathered from the discussion in #300, recently filed/updated advisories, and PR feedback like this is that people actually want to expand the definition of informational advisories to track bugs that I would describe as low to moderate severity security bugs. Actually, npm's description of a low severity bug as "address at your discretion" sounds like where lots of Rust's soundness bugs will reside.
I need feedback on this impression. If it's confirmed that we do want to track low-severity security bugs as informational advisories, maybe the best thing to do is to update the description of the informational advisory and say that it can be used to track low-severity security bugs. I would still dislike the name "informational", but given that `cargo-audit` gives warnings for them, I consider it a reasonable solution to settle on.
> track low-severity security bugs as informational advisories
Not all of those are soundness issues though, right? So I guess one related question here is what would happen with a low-severity security bug that causes DoS or some kind of injection, but not UB.
> Not all of those are soundness issues though, right?
Yes, that's one of the reasons I thought unsound informational advisories should only track bugs that are not known to be security vulnerabilities (ones that would not even qualify as "low severity"), to be consistent with the policy for non-UB security bugs. However, I realized that it might be more useful to track low-severity bugs in the informational category than to keep that category very narrow.
Let's first wait for more feedback to confirm that tracking low-severity ("address at your discretion") security bugs as unsound informational advisories is what people want, and discuss the remaining issues after that.
Converting to discussions
btw a concrete issue here we can also deal with around this: https://github.com/rustsec/rustsec/issues/634
Continued from the discussion in #300.
The current version of CONTRIBUTING.md doesn't specify what it means for a library to have a security bug. Here, following the practice of NPM and other security advisory databases, I suggest an initial definition: "a library is considered to have a security bug if there is a known usage of its safe APIs which creates a security vulnerability." I would very much appreciate discussion and feedback on this definition.
Repeating here from #300 with a slight revision:
Any reachable exploitable bug in a program qualifies as a security vulnerability. For executable files, the judgement is easy: if there is a program path that handles untrusted input (file, network socket, etc.) and triggers an exploitable bug, it should be considered a security bug. However, for libraries, there is one more layer of indirection. A user of a library decides how to use its APIs in their program, and whether or not a bug in an API becomes an exploitable bug depends on how the user structures their program. To the best of my knowledge, such bugs are handled as security vulnerabilities by many existing security advisory databases such as CVE and NPM.
Node.js Ecosystem Triage Guidelines defines a library vulnerability as:
What's not described here is how NPM handles a potential security bug in an uncommon usage of a library. The answer is that it is also treated as a security vulnerability, and NPM assigns a different severity score depending on how likely those APIs are to be used in a vulnerable way. CVE considers a similar aspect when assigning CVSS scores, too.
For example, take a look at these two NPM security advisories:
They are both classified as SQL injection bugs. The first bug was categorized as low severity because the vulnerable variables are unlikely to be controlled by user input, while the second was categorized as high severity because there are common legitimate scenarios where the bug can be triggered by an attacker.
Note that they are both filed as a "security advisory." Unless we have a good reason not to follow the standard of existing security advisories, I would much prefer to classify a bug in this category as an actual security vulnerability, not an informational one.