rustsec / advisory-db

Security advisory database for Rust crates published through crates.io
https://rustsec.org

RUSTSEC-2020-0011 is not a security vulnerability. #275

Closed hdevalence closed 4 years ago

hdevalence commented 4 years ago

PR #268 added a security advisory related to plutonium, a crate that hides unsafe usage.

However, the security advisory does not report a security issue with the crate, or any defect with its intended functionality, but makes a value judgement about whether the crate's intended behavior is good.

Security advisories are not for "crates we don't like", they're for conveying information about defects in intended behavior.

Manishearth commented 4 years ago

That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"? Labeling something does not make a value judgement; warning on it does. This is why crev and geiger are the right places for this: crev is equipped to handle "this crate does not gel with my subjective values". Crev is the counterpart of the GMO label, not this database.

tarcieri commented 4 years ago

@Manishearth what about allowing cargo deny to use existing RustSec infrastructure to manage (optional) blacklists?

(Perhaps I should ask them about that idea before suggesting it up front, but)

Manishearth commented 4 years ago

Like, third party blacklist databases? Sure. I don't think it belongs here, though

tarcieri commented 4 years ago

More like a collection of crates which deliberately expose unsound behavior as @RalfJung described here:

https://github.com/RustSec/advisory-db/issues/275#issuecomment-620135319

...but rather than exposing that information through cargo audit, deliberately expose it through cargo deny instead, allowing it to use the existing RustSec infrastructure which it already integrates with.

Manishearth commented 4 years ago

Sure, sounds good

Manishearth commented 4 years ago

I also do think the database should try to distinguish between "vulnerability" and "can accidentally let you write UB" (i.e. unsound), because the social processes around fixing the two are different -- specifically, the latter may not require action on the part of the end user, but might on the part of intermediate crates. (End users still need to know, and perhaps pressure their upstreams to verify things, or audit them themselves.)
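
For concreteness, here is a minimal hypothetical sketch (not from any real crate) of the "unsound but not necessarily an exploited vulnerability" case: the function below may never misbehave in the crate's own usage, but any downstream caller can trigger UB from safe code.

```rust
// Hypothetical sketch: a safe wrapper that skips the bounds check its
// `unsafe` call relies on. This is unsound even if no current caller
// ever passes an out-of-range index.
pub fn read_byte(v: &[u8], i: usize) -> u8 {
    // UB whenever `i >= v.len()`, yet callers need no `unsafe` block.
    unsafe { *v.get_unchecked(i) }
}
```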

bjorn3 commented 4 years ago

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed: this crate enables others to intentionally escape unsafe.

```rust
fn call_asm_bytes(code: &[u8]) {
    unsafe {
        // Mark the byte buffer as executable...
        region::protect(code.as_ptr(), code.len(), region::Protection::ReadWriteExecute).unwrap();
        // ...then reinterpret the slice's data pointer as a function pointer and call it.
        (*(&code as *const _ as *const fn()))();
    }
}
```

also lets you intentionally escape unsafe code. It doesn't by itself cause a buffer overflow; instead, you have to pass it asm bytes that perform a buffer overflow. However, it is unsound, as it doesn't verify the asm bytes for memory safety. This is not much different from other crates that allow you to intentionally escape unsafe.

najamelan commented 4 years ago

> That analogy is facile: When you go to the store and buy GMO produce, does the cashier give you a stern look and say "are you SURE you want this"?

If that cashier is called cargo-audit and is there because this shop advertises that they have a health and environmental expert at their customers' disposal, and I specifically ask them whether there is anything I might want to know about this product in those regards, they'd better.

Unfortunately that is not the analogy. It's more like I go and eat at an organic restaurant because, you know, unlike some C-burger restaurants, organic restaurants are not supposed to allow unsoundness in safe food. It so happens that I am one of those customers who always has their FDA expert and a mobile lab with them when they go out eating, so I ask my pocket expert to test the food.

The reasoning @CAD97 brings forth here is that my FDA expert should not tell me they detected GMOs in the organic food, because if they did, GMOs would be linted out of existence. That is kind of saying "people really shouldn't have the choice, even if they go to great lengths to specifically find out about this, because I know they will reject it and my business model only works if I can force it onto them".

I don't say that was their intention; I'm only saying the logic is really similar.

CAD97 commented 4 years ago

@najamelan the fact that you've argued for cargo publish to require the dirty flag if cargo audit is not clean damages your argument that this would be purely opt-in pedantic checking.

najamelan commented 4 years ago

That would be my preference, and I feel it's common sense in a safety-focused language, but that is not what happens (not today, at least), and not what this advisory was about. It also doesn't mean that this needs to happen for all categories of advisories. What I take away from this discussion is that there is demand for:

A possible implementation for cargo publish could be that it requires a flag to publish something to crates.io if it contains an exploitable vulnerability, but just prints the output from cargo-audit to the console for warnings.

workingjubilee commented 4 years ago

Just like "GMO" is in actuality a very specifically legislated definition that has gaping holes you could slip a three-eyed fish through, and so people do in fact eat what they would probably label "GMO food" if its production was described to them (whether or not that is a naive decision on their part), it is not the case that people have a choice about whether or not their code includes unsafe in Rust. I assert this because it is common knowledge that the core and standard libraries rely on unsafe code, so it is challenging to write any practical program without eventually invoking an unsafe function (somewhere, deeply buried in the compiler). If the utility of warnings like unsafe is that we can verify the code and establish for ourselves that the safe interfaces to unsafe functions are indeed safe, and the primary purpose of plutonium is to circumvent cargo geiger, then plutonium already existed even before it was written: even if you listed off all the locations to me, it is beyond me to verify the entirety of rustc, and, having experimented, it appears cargo geiger does not even attempt to trace my crate's choices with `use std::`.

Sure, people say that open source makes bugs easier to notice, but I sure haven't checked if all 2701 contributors to rustc are in fact not secretly all alter egos creatively devised by Ken Thompson, here to make us really reflect on trusting trust. It would have been really clever of Ken Thompson to find a way to download his mind into my brain if so, but it would hardly be the most surprising discovery of the year, all things considered. I mean... 2020.

RalfJung commented 4 years ago

@Manishearth

> (I actually really want the converse of this crate, a macro that converts `fn foo(args) { body }` to `unsafe fn foo(args) { fn foo_inner(args) { body }; foo_inner() }` so that we can experiment with rust-lang/rfcs#2585)

You mean like https://crates.io/crates/unsafe_fn ? :D
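
For reference, a minimal macro_rules! sketch of the transformation being described (hypothetical macro name, simplified to non-generic functions with an explicit return type; not the actual unsafe_fn implementation) could look like:

```rust
// Hypothetical sketch: wrap the supplied body in an `unsafe fn`, while the
// body itself stays an ordinary (non-unsafe) inner fn.
macro_rules! unsafe_wrap {
    (fn $name:ident($($arg:ident: $ty:ty),*) -> $ret:ty $body:block) => {
        unsafe fn $name($($arg: $ty),*) -> $ret {
            fn inner($($arg: $ty),*) -> $ret $body
            inner($($arg),*)
        }
    };
}

// Expands to `unsafe fn double(...)`, so callers must now write `unsafe { ... }`.
unsafe_wrap! {
    fn double(x: u32) -> u32 { x * 2 }
}
```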

@CAD97 Those are some good arguments. Indeed, I already pointed out that the weak spot in my own argument is the definition of soundness for a macro.

Re `trusted!`, part of me feels like it should be called `unsafe_trusted!`. But maybe that's just me desperately clinging to an objective soundness condition for macros? I feel like we ought to have a systematic way across the ecosystem to figure out if calling a macro could accidentally cause UB, to use @Manishearth's formulation. But maybe that's hopeless?

So, given there seems to be rough consensus that tracking soundness violations in the DB would make sense but should be a distinct category, is macro soundness really the main remaining point of contention here? As in, @CAD97 @Manishearth, would you agree that a crate with a public function like this should get an advisory filed (in the soundness category), and that no amount of docs on the crate's side can make this not an unsoundness?

CAD97 commented 4 years ago

Yes, I fully agree that a publicly-exposed safe `fn` that performs unsafe operations without internally guaranteeing that their preconditions are met is worthy of an advisory that the function is unsound. The language has a way to mark preconditions of a `fn` that are necessary for soundness: `unsafe`.

The language provides no such mechanism for macros.
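
To make that contrast concrete, here is a minimal hypothetical macro (not referring to any real crate) whose call sites look entirely safe even though the expansion dereferences a raw pointer:

```rust
// Hypothetical sketch: the `unsafe` block lives inside the expansion, so
// nothing at the call site marks the precondition (`$ptr` must be valid).
macro_rules! deref_raw {
    ($ptr:expr) => {
        unsafe { *$ptr }
    };
}

fn main() {
    let x = 42u32;
    let p = &x as *const u32;
    // Looks like safe code, but is only sound because `p` happens to be valid.
    assert_eq!(deref_raw!(p), 42);
}
```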

**Tangent on the safety of macros:** I fully agree that the default for macros is to be safe, and, if at all possible, a macro that is not safe to call should require using `unsafe` to call it, by calling some `unsafe fn` internally without an `unsafe` block internal to the macro. But I also strongly believe that because the language provides no mechanism for actually marking a macro as unsafe to call, that property can only be determined from documentation. If a macro does not have a `# Safety` header in its documentation, I would still be on board with filing a soundness advisory against it. To be honest, I think the only reasonable macro that allows writing _arbitrary_ unsafe code inside it is a macro whose entire purpose is to be an alias for `unsafe`. Unsafe code is tricky, and doing any macro source transforms on unsafe code is scary.