rustsec / advisory-db

Security advisory database for Rust crates published through crates.io
https://rustsec.org

RUSTSEC-2020-0011 is not a security vulnerability. #275

Closed hdevalence closed 4 years ago

hdevalence commented 4 years ago

PR #268 added a security advisory related to plutonium, a crate that hides unsafe usage.

However, the security advisory does not report a security issue with the crate, or any defect in its intended functionality; it makes a value judgement about whether the crate's intended behavior is good.

Security advisories are not for "crates we don't like", they're for conveying information about defects in intended behavior.

tarcieri commented 4 years ago

It’s definitely an interesting and unusual case.

It might make sense to create a new class of informational advisory for cases like this, as informational advisories are the existing mechanism for publishing advisories that aren't specifically about a security vulnerability (presently they are used exclusively for tracking unmaintained crates). Informational advisories are surfaced as warnings instead of vulnerabilities.

Shnatsel commented 4 years ago

I see it was filed as a regular security advisory rather than a special "informational" advisory type we use for e.g. unmaintained crates (example). @najamelan how do you feel about downgrading the advisory to a warning? This will still allow using it in cargo-deny, which I believe was the original motivation for the advisory.

hdevalence commented 4 years ago

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

myrrlyn commented 4 years ago

Co-signed. As written, the crate is a modification to the visual representation of Rust source code; it does not represent or use a vulnerability in either the compiler’s intake parser or output generator. The only change it makes is in the reader-facing representation of a narrow part of syntax.

plutonium represents a point of discussion about the philosophical approach that we as authors and viewers of Rust syntax take with regard to a class of operation. It does not represent any flaw in either the production or execution of a program, and as such, does not qualify as a vulnerability.

alex commented 4 years ago

This is code that lets you write memory-unsafe code without the unsafe keyword. That's unsoundness. When it happens by accident it's a security issue; when it happens on purpose, it still is.

myrrlyn commented 4 years ago

No, it is not unsoundness. Unsoundness is specifically a term for the emission of a class of incorrect executable instructions, or a class of semantic input that causes the compiler to construct an incorrect view of the abstract program. It does not apply to the dialect of syntax that viewers observe.

tarcieri commented 4 years ago

The crate isn't unmaintained, though. By all appearances it implements its intended functionality perfectly, the only problem is that some people don't like that functionality.

Informational advisories are modeled as an enum:

https://github.com/RustSec/rustsec-crate/blob/ee8b2e5/src/advisory/informational.rs#L10-L21

pub enum Informational {
    /// Security notices for a crate which are published on <https://rustsec.org>
    /// but don't represent a vulnerability in a crate itself.
    Notice,

    /// Crate is unmaintained / abandoned
    Unmaintained,

    /// Other types of informational advisories: left open-ended to add
    /// more of them in the future.
    Other(String),
}

I think we could probably use the catch-all "notice" category for this purpose. This surfaces as a warning in cargo audit.

andrew-d commented 4 years ago

FWIW, my vote is that some sort of notification is correct here. To draw a comparison: if a crate bundled and made use of a shared object that contained a vulnerability, I would want to know about that, even if all the Rust code in the crate was safe/sound, since the expected behavior ("this crate is safe and behaves as expected") is violated. Similarly, plutonium subverts the expectations and security guarantees of a developer who transitively depends on it, in a way that could result in unsafety or unsoundness, and that meets the bar for a reportable vulnerability to me.

najamelan commented 4 years ago

I'm fine with this being a warning. I just feel this should never slip unnoticed into a dependency graph, and the crate was deliberately designed to exploit a limitation of cargo-geiger so people wouldn't notice they have unsafe in certain dependencies.

It is further problematic because grepping for the string unsafe is a common review technique that this crate defeats, which warrants a warning in my opinion.

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates.io. If people want to write unsafe code without using the keyword at home, I couldn't care less, but this invites anyone not wanting hassle for using unsafe to just add it as a dep in Cargo.toml, and I'd rather we don't make that easier and less transparent.

myrrlyn commented 4 years ago

If we're going to debate the virtue of a source file, then I'd like to respectfully rejoin that, for example, I am the author of a crate with a ridiculously high unsafe metric despite only a select few parts of the codebase actually being unsafe. The #[safe] annotation allows authors and readers alike to distinguish between "this performs an action that, while safe, is not registered as such with the compiler" and "this is unsafe and warrants significant attention".

This crate provides an opportunity to decrease the background noise and increase the signalling capacity of the unsafe keyword, so that rg unsafe becomes actually useful.

najamelan commented 4 years ago

@myrrlyn I'm sorry we disagree, but as a reviewer "these uses of unsafe don't merit review" is one of the assumptions you definitely want to double check, so you want to find all occurrences and not only the ones the author thought were maybe dangerous.

Manishearth commented 4 years ago

I don't think "this circumvents cargo-geiger" is enough justification. cargo-geiger is a separate tool, and one with a flawed metric that this crate neatly illustrates. If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

I don't quite agree with @myrrlyn -- I don't think one should use this crate in production code -- however, his underlying point about degrees of unsafe is one I strongly agree with. cargo-geiger does nothing to help there. Perhaps that's not the point of the tool, but it's just one of the problems with treating the count of unsafe as a security metric. At best it gives a rough filtering to areas of code you might want to look at closer.

And this advisory database is about security.

hdevalence commented 4 years ago

@najamelan

I can only conclude that the intent and purpose of this crate is to make reviewing for memory safety harder, which is not something I think should be on crates.io.

The thing is, security advisories are not about what you think should be on crates.io.

There is no security issue here. There should not be a security advisory, informational or otherwise. It devalues the meaning of this database.

Let's be clear: we're talking about a crate with 0 reverse dependencies and (as of this writing) 102 downloads by anyone, ever. The only reason there's an "advisory" is because someone saw it mentioned on Reddit and decided they didn't like it.

sunjay commented 4 years ago

If cargo-geiger users care about this crate being caught, cargo-geiger should handle that, not the rust security advisory database.

+1 on this and the rest of Manish's comment as well. I think this particular sentence should be the main focus of this issue. The security advisory should be removed and a check for this should be added to cargo-geiger if that is deemed necessary.

yaahc commented 4 years ago

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

CAD97 commented 4 years ago

If plutonium deserves an advisory, does my rc-borrow? It's not directly trying to hide its internal unsafe usage, but because basically the entire API is emitted from a macro, cargo-geiger shows it as not containing any unsafe.

Or what about proc-ceasar? The point of that crate is to make your source code completely unreadable by applying the Caesar cipher to it. That both hides unsafe usage from cargo-geiger and makes the source code illegible to a human; it's thus strictly worse than plutonium.

It's not a security db's job to pass judgement on whether a library is "good." Both plutonium and proc-ceasar are demonstrations of how proc-macros can be used to obfuscate source code from naive static analysis. They both do exactly what they advertise. There's nothing malicious nor exploitable about the crates themselves.

You can argue that maybe they shouldn't have been published on crates.io. But that's not the security advisory DB's place to argue.

It's also worth mentioning that this advisory won't help prevent any malicious actors from using this technique. It's trivial to reimplement an unsafe hiding macro in your own crate, and any malicious actor is going to do that before pulling in a crate whose express purpose is replacing the unsafe keyword.

If anything, this advisory makes it easier to find out about this trick, and accomplishes the exact opposite of its intent of keeping unsafe from being hidden from cargo-geiger.

tarcieri commented 4 years ago

It seems we're pretty polarized on this issue. I see both sides of it as well: it's definitely true this is not a vulnerability in and of itself, however it'd be nice if, somewhere, there existed a list of crates which circumvent static analysis around safety.

There's a case to be made that RustSec isn't the place for it; however, in developing the informational advisory feature (originally to handle unmaintained crates), my observation was that the advisory format is well suited to cataloguing this kind of information.

Furthermore, it's been designed to be quite flexible: the alerting behavior for informational advisories can be configured on a category-by-category basis. As it stands, if we were to catalogue this advisory as informational, by default it wouldn't be surfaced by cargo audit at all unless the user explicitly opted in via configuration/arguments.

If we do want to go down this road, I think it might make sense to define a new category of informational advisories for this purpose, although I'm a bit at a loss as to what to call it.

Here's what I'd propose as a path forward:

najamelan commented 4 years ago

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using and thus to some extent relying on cargo-audit for a while, but I didn't know that I'm not seeing all categories of warnings. And I can't find anything on how to configure that in the README. If it were up to me, cargo publish would refuse to publish crates with dependencies that have advisories unless you pass a flag, like --allow-dirty does for a dirty working directory.

Running cargo help audit reveals:

    -L, --log-level <log-level>
            The log level for messages [default: warn]

Running cargo audit --log-level notice gives:

error: unrecognized option `--log-level`

cargo-audit 0.11.2
Tony Arcieri <bascule@gmail.com>
Audit Cargo.lock for crates with security vulnerabilities

FLAGS:
    -c, --config CONFIG       path to configuration file
    -h, --help                print help message
    -v, --verbose             be verbose

It doesn't work, but there is now a mention of a configuration file, hmm... Where is the documentation for how to use the configuration file?

I think the point is that people should consent to something like this getting into their dependency graph. Unsafety should be opt in, not opt out. Currently you already need to know about and not forget to run tools like cargo-audit before every release; if you then also have to know about mysterious configuration settings, it's quite self-defeating.

I am fine with notices and warnings returning 0 exit status for CI and such, but for the 0-2 lines of output that an up to date crate should have, is it really that bad to just show everything by default?

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork?

No, it wouldn't. But if they add it to crates.io, anyone who notices can file an advisory for it. And if not, they need to use a git dependency, which is visible during compilation and can be detected more consistently by cargo-deny. For sure there are many ways to do malicious things that are hard to detect right now. The ecosystem isn't very secure. And it's not really feasible to manually verify every version of every dependency, especially as the ecosystem is young and many crates update often. That's where automated tools come in to alleviate the problem somewhat.

If not it seems like this must be fixed in cargo geiger

People are working on that.

CAD97 commented 4 years ago

@najamelan have you misunderstood what plutonium does? All it does is change how you write the unsafe keyword, from

fn scary() {
    unsafe {
        // code
    }
}

to

#[safe]
fn scary() {
    // code
}

and as such, unsafety is still opt in. Its existence changes nothing about Rust's unsafe model; it just introduces a different way of writing code that adheres to it.


As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of. (Instead, maybe use a crev review of "don't use," which is a level that they do offer.) But more importantly, how do you define an actual cutoff point for which crates deserve that kind of advisory? There are thousands of toy libraries that shouldn't be used in production; do they all deserve an advisory?


And I must stress once again: plutonium is not doing anything sneaky. All it is doing is providing a macro that inserts an unsafe block around the body of the function. It's trivial for me to write a macro_rules macro pro_gamer_move! that does the exact same thing and which impedes cargo-geiger in the exact same fashion.
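
For illustration, here is a minimal sketch of what such a macro_rules macro could look like (the name pro_gamer_move! and this body are hypothetical, written only to make the point concrete):

// Hypothetical declarative macro that wraps its body in an `unsafe` block. If it
// lived in a helper crate, the `unsafe` keyword would appear only at the definition
// site, never at any call site in a downstream crate.
macro_rules! pro_gamer_move {
    ($($body:tt)*) => {
        unsafe { $($body)* }
    };
}

fn main() {
    let x = 42u8;
    let p = &x as *const u8;
    // Raw-pointer dereference without writing `unsafe` at the call site.
    let y = pro_gamer_move!(*p);
    assert_eq!(y, 42);
}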

plutonium does not deserve a security notice. "Bad style" advisories are the domain of code review, and cargo-crev handles that nicely.


Perhaps cargo-deny should also offer functionality to blacklist crates yourself.

tarcieri commented 4 years ago

@najamelan

I am fine with this being a notice or a warning, but I do find it problematic if that is hidden by default. For one, I have been using and thus to some extent relying on cargo-audit for a while, but I didn't know that I'm not seeing all categories of warnings.

For what it's worth, there aren't presently any advisories in the database using the informational = "notice" setting, so you haven't been missing anything.

Agreed the docs could be better; there are also some unmerged PRs which make the warning functionality for informational advisories more flexible.

@yaahc

Would putting this crate in the advisory database prevent someone from forking and renaming it and then using that fork? If not it seems like this must be fixed in cargo geiger rather than via an advisory.

This is true of any crate an advisory is filed against.

The best we can do is handle crates on a case-by-case basis.

@CAD97

As for having it be a "this crate shouldn't be used in production code" warning: for one, that's a value judgement that a centralized DB shouldn't be in charge of.

I would be perfectly fine with removing the verbiage to that effect from the advisory.

Would you mind opening a PR?

tarcieri commented 4 years ago

I marked RUSTSEC-2020-0011 as informational = "notice" in #278

CAD97 commented 4 years ago

I made the wording more objective in #279.

I still believe that the advisory should be marked obsolete and the description point people to use cargo-crev for this kind of "bad code judgement," but don't have the bandwidth to write the wording currently. The description should probably also note how the advisory got in when it shouldn't (being more a value judgement than an objective advisory, even with my objectivity PR).

najamelan commented 4 years ago

and as such, unsafety is still opt in

It is if you use the crate, not if it creeps into your dependency graph.

Just to be clear: I originally reported this to cargo-geiger because I felt people should know about it, and there it was suggested this should be dealt with by an advisory. The cargo-deny maintainers also think this is something that should be handled by an advisory, so I filed a PR here.

Given that this seems to be an uphill battle, just know that it doesn't matter that much to me. cargo-geiger will get fixed, and I have it hardcoded in my cargo-deny configuration. We can agree to disagree, but every time you do a cargo test and cargo has internet access, you are running random untrusted code on your dev machine. That's the state of affairs today.

Probably some companies do development exclusively in containers/VMs, but there probably aren't many, given the friction it adds. And there is no reason to believe Rust developers pick better passwords than npm developers. Seeing plutonium just reminds me of how dire the situation is; warning people about it is a drop in the ocean, but it's a drop. I just hope that collectively we find ways to improve on this one step at a time.

CAD97 commented 4 years ago

@najamelan if you care about trusting the packages you're running, you should be using cargo-crev and reviewing the code you're downloading. Libraries definitely do not need unsafe to do malicious things, and cargo-geiger should not in any way be a substitute for actual code review.

I suppose you should just go ahead and ban proc-caesar as well, since it also allows writing unsafe code without writing the unsafe keyword, and rc-borrow, because it shows up as "clean" in cargo-geiger due to having all of its API produced by a macro.

unsafe is useful as a hint of where to pay specific attention to memory safety, but it is nowhere near a justification for skipping review of unsafe-free crates.

You opted in to whatever your dependency is doing when you trusted them to provide a library without doing thorough code review yourself.

It really does sound like what you need is cargo-crev. It even provides the geiger numbers as a guideline towards the most important review targets.

tarcieri commented 4 years ago

@CAD97 cargo-crev is an interesting tool, but it has comparatively lower adoption/reach, and while you may happen to disagree with it, it seems people are also interested in using the RustSec Database for this purpose. I’d also like to point out these tools aren’t a dichotomy: I’d suggest using both.

CAD97 commented 4 years ago

I mostly just hope that the RustSec DB can stick to mostly objective advisories. crev's web-of-trust model is built to handle subjective review and trust; the central authority that rustsec uses is best when the central authority has some amount of (perceived) objectivity.

RalfJung commented 4 years ago

@myrrlyn

No, it is not unsoundness. Unsoundness is specifically a term for the emission of a class of incorrect executable instructions, or a class of semantic input that causes the compiler to construct an incorrect view of the abstract program. It does not apply to the dialect of syntax that viewers observe

You are wrong here. Unsoundness has nothing to do with executable instructions. It has to do with whether safe code can cause UB (and UB, too, has nothing to do with executing instructions unless you mean "executed on the Rust Abstract Machine"). See for example this definition.

Whether or not plutonium is unsound depends on whether the following is considered "safe code":

#[safe]
fn scary() {
    // code
}

Given that unsafe is the signalling keyword for unsafe code in Rust, and given that that keyword does not appear here -- to the contrary, its negation does -- I think one can very reasonably say that plutonium is unsound. Sure, it is "just" introducing a different keyword as alternative to unsafe, but since unsoundness is all about that keyword, I think this is enough.

RalfJung commented 4 years ago

@CAD97

If plutonium deserves an advisory, does my rc-borrow? It's not directly trying to hide its internal unsafe usage, but because basically the entire API is emitted from a macro, cargo-geiger shows it as not containing any unsafe.

Is it possible to cause UB using that macro? If yes, does the macro name contain "unsafe"? If not, then yes, I'd say this is unsound.

I've seen such unsafe-to-use macros before, which internally use unsafe { .. }, and the usual convention seems to be to make unsafe part of the macro name to mark that.
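
As a small sketch of that convention (the macro name and body here are made up purely for illustration):

// A macro that can introduce UB from otherwise safe-looking code carries "unsafe"
// in its name, so grepping the calling crate for `unsafe` still finds every use site.
macro_rules! unsafe_get_unchecked {
    ($slice:expr, $index:expr) => {
        unsafe { *$slice.get_unchecked($index) }
    };
}

fn main() {
    let data = [10, 20, 30];
    // The caller is responsible for the bounds invariant; the name makes that visible.
    let second = unsafe_get_unchecked!(data, 1);
    assert_eq!(second, 20);
}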

anderejd commented 4 years ago

Hi everyone, the cargo-geiger guy here. Interesting discussion.

In my opinion this issue has very little or nothing to do with cargo-geiger and is only related to the intention of the crate in question. The intention is to subvert the expectations of developers and to sneak code past code review.

That being said, if anyone has the time to add support for macro expansion to cargo-geiger that would be very nice!

Manishearth commented 4 years ago

I'd be surprised if a weird annotation would sneak past code review. The intention is to sneak past automated tools like geiger, making it a geiger issue.

najamelan commented 4 years ago

It also sneaks past grepping for "unsafe" which I think is commonly used for finding low hanging fruit in review.

I think the point is that reviewing every version of everything in your dependency graph is often impossible from a resource point of view, so we use heuristics by sheer necessity. In that light, anything whose only purpose is to hamper the chances of non-exhaustive review is a very big sign of "possible danger" or "possible bad intent", and I feel it would be worth having a centralized warning for that which everyone can benefit from. I don't really understand why that is considered so opinionated.

I definitely don't review every new version of everything when I run cargo update, but if this were to appear in my dep graph, I would definitely take the time to have a closer look. How anyone who wants to be responsible for what they publish on crates.io could not want to be alerted if this crept in is beyond me.

Individualized tools like cargo-crev or cargo-deny aren't much help for benefiting from the review work of others. Centralized tools have a much higher practical effect. I don't think the focus on theory is very helpful. In theory cargo-crev could be a good tool for this, in practice it's not. In theory this might or might not violate the soundness rules of safe rust. For practical purposes that's not really the point.

I think the discussion in this issue brings up a few fundamental questions:

  1. How do we move towards a practically safer eco system, together?
  2. Why do people feel like warning about this is opinionated rather than common sense?
  3. Is the problem rather which tool is used for what, than the underlying issue? If so, what other tools could effectively help here as opposed to theoretically.
  4. Isn't there other low-hanging fruit? For example, wouldn't it be nice if cargo publish required passing an extra flag when you try to publish to crates.io with known vulnerabilities in your deps? Obviously this can only work for people who have cargo-audit installed, but wouldn't it make sense to ship that by default one day?

anderejd commented 4 years ago

@Manishearth Okay, you think this technique to sneak code past code review would be ineffective, but does that matter? Is it not the intention to subvert code review (and code review tools) that matters, rather than its approximate effectiveness?

Is it realistic to think that we as programmers read every line of code of every dependency of every dependency and fully understand every piece? I would say no, that is for most open source projects at least unrealistic. We do need tools to help us evaluate and review the libraries, be it a simple text search or something more.

The crate in question is a security problem since it aims to make it harder to rely on one of Rust's main selling points over C/C++, that we can search for the "unsafe" keyword during code review.

Manishearth commented 4 years ago

is a very big sign of "possible danger" or "possible bad intent"

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability. I don't dispute that this crate in a depgraph is fishy, I dispute that it is a security issue meriting addition to this database. It's up to tools like crev and geiger to handle this.

Why do people feel like warning about this is opinionated rather than common sense?

Warn about it all you want, just not here, the security vulnerability database.

We do need tools to help us evaluate and review the libraries

Sure. These tools exist, they're cargo-crev and cargo-geiger. This is not a security vulnerability and should not be a part of cargo audit's database, which needs to have a higher standard if we wish to have people actually use it to watch for security issues and not be worried about being swamped with other things. The fact that this was merged certainly reduced my confidence in this database.

najamelan commented 4 years ago

I don't dispute that this crate in a depgraph is fishy

I'm relieved to see we agree on that.

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability.

Nor are unmaintained crates, but maybe the whole concept of the advisory can be re-examined. I think you're always entitled to question existing conventions.

It's up to tools like crev

I don't think crev is a workable solution at the moment. If you want I can explain in detail why. Maybe something else could be, but right now the advisory seems the most appropriate.

and geiger

It won't stop it from getting round grep "unsafe".

be worried about being swamped with other things

I'm sorry, I would like to take you seriously, but really? You will only get swamped by it if you actually have it in your crate graph. That being said, any crate you run cargo audit on should probably have at most 0 serious issues and 2-3 low-risk issues like fishy and unmaintained crates. I sure think it's desirable that people can opt out of seeing notices and warnings, or certain categories of them, or even individual ones by ignoring certain advisory numbers in a config file. I don't think that works today, but it is definitely possible to implement and seems to be intended.

Is that what you call being swamped?

RalfJung commented 4 years ago

It also sneaks past grepping for "unsafe" which I think is commonly used for finding low hanging fruit in review.

Note that the convention I mentioned, of using unsafe in macro names when they can introduce UB into otherwise safe code, would fix this as well.

So, as far as I am concerned, plutonium would become sound if it renamed its #[safe] attribute to #[unsafe].

Yes, precisely, this is a "security fishiness" metric, NOT a vulnerability.

AFAIK there is precedent for reporting unsound code as a vulnerability here. I do not think that that is per se unreasonable. And by that standard, if we agree that plutonium is unsound, it makes sense to have an advisory.

I think it could make sense to have "unsound" as a separate severity/category for advisories, to distinguish it from more "concrete" vulnerabilities. If we go with that, plutonium would be in the "unsound" category. For example, some parts of this advisory are similarly just about "it's unsound" -- the crate itself never "drops uninitialized memory", but if client code causes panics at the wrong time (of which we do not have an example, but we also did not look for one), then there may be UB.

Manishearth commented 4 years ago

I'm sorry, I would like to take you seriously, but really?

I'm talking about the general lowering of standards I perceive here, not just this example. I do not want us to lower the standards of a security vulnerability database.


@RalfJung I covered some of this in this comment

In Rust there are two kinds of vulnerabilities around unsafe: "this API has parameter choices that are undefined behavior/bad memory access" and "this API can be used in such a way in downstream code that leads to it invoking undefined behavior or bad memory access" (in C and C++ the latter is basically always true, vulnerabilities are more of the former kind). Essentially, the difference between "an attacker can exploit this" and "downstream can exploit (or accidentally trip) this". withoutboats goes into some detail on this here

It's already unfortunate we conflate the two. The first is a matter of "everyone should patch asap", the second is a matter of "downstream code should be audited for problematic use, and should trigger the first kind of vuln on those" (though ideally everyone patches anyway). Ideally as a community we have ways to talk about these differently because the implications, severities, and social dynamics are different. We don't, but that's not the problem at hand.

Plutonium is not in either category, it is in the category of "this API can be used in such a way that it can create APIs that can be used in such a way that downstream might have UB". This is another level of meta, and to me it's very subjective if that class should be considered as a class of vulnerabilities. It's only being considered as a member of the "downstream can exploit this" class because it's actually "downstream can exploit this to circumvent cargo-geiger", which is a very different statement and not indicative of a vulnerability.

I do wish we distinguished between what you're calling concrete vulns vs unsound, but I think plutonium is a part of a third category that's an additional layer of meta.

RalfJung commented 4 years ago

I do wish we distinguished between what you're calling concrete vulns vs unsound, but I think plutonium is a part of a third category that's an additional layer of meta.

I disagree, I don't think such a layer even exists -- every "second-order unsound" library is also "first-order unsound".

Plutonium is "first-order" unsound, as demonstrated by this "safe" program that is causing UB:

#[safe]
fn nasty() {
    *(std::ptr::null_mut::<i32>()) = 0;
}

fn main() { nasty() }

Manishearth commented 4 years ago

It's second-order unsound because it is behaving as advertised, but the advertised behavior is bad.

With first-order soundness bugs, I might accidentally use the crate in my crate in a way that I expect to work but is actually broken under the hood, and I need to be notified so I can audit my crate and upgrade if necessary (ideally, I should always upgrade, but it's not always possible, especially if a patch doesn't exist yet).

There's no "oops that's actually broken" moment here. Whoever adds plutonium to their crate knows exactly what they're doing. It's their downstream that need to worry about it being introduced in their dep graph, which is what makes it second order, to me.

This is a social system, the intent matters; you have an additional layer of intentional abuse involved before this crate becomes a vulnerability.

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

tarcieri commented 4 years ago

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

This is exactly how the informational advisory feature works. It introduces a second class of advisories which are explicitly distinct from advisories for security vulnerabilities.

How informational advisories are surfaced to the user is configurable. Right now only informational advisories for unmaintained crates are surfaced, and are surfaced as warnings instead of as explicit "vulnerabilities".

The only other class right now is a catch-all notice class (which despite its name is not surfaced to the user at all, although perhaps it should be). The enum used for informational advisories is explicitly open-ended by design, though: we could add an unsound categorization and decide how to surface those.

Notably, cargo-deny also consumes the RustSec Advisory DB, and could potentially error on unsound crates if configured to do so.
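
For concreteness, a sketch of what such an addition to the Informational enum quoted above might look like (a hypothetical extension for discussion, not code that exists in rustsec today):

pub enum Informational {
    /// Security notices for a crate which are published on <https://rustsec.org>
    /// but don't represent a vulnerability in a crate itself.
    Notice,

    /// Crate is unmaintained / abandoned
    Unmaintained,

    /// Hypothetical new category: the crate exposes an API through which code not
    /// marked `unsafe` can cause undefined behavior.
    Unsound,

    /// Other types of informational advisories: left open-ended to add
    /// more of them in the future.
    Other(String),
}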

Manishearth commented 4 years ago

This is exactly how the informational advisory feature works.

I actually think the "unsound" class is typically more severe than "unmaintained". Not in this case, but in general.

In this case it requires another layer of intent so I still feel it's a different class and should not be the subject of this db.

RalfJung commented 4 years ago

It's second-order unsound because it is behaving as advertised, but the advertised behavior is bad.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional. I do not buy that argument. Sure this is a social system, but it risks becoming meaningless if we do not try to uphold some objective standard. "Soundness" is the objective standard on which the entire Rust ecosystem rests. The social part is the one where we all agree (or not) that soundness is important, and unsoundness a bug.

"(Un)sound" is an objective technical criterion, on which crate author intentions have no influence. (The only part I think that is subjective here is how to mark a macro as being "unsafe to use"; I do not know if "it should have unsafe in its name" is the consensus opinion.) Plutonium is unsound. Documentation can point out that it is deliberately unsound, but no amount of documentation can change the fact that it is unsound. Thus, is should have an advisory filed with whatever treatment we give to soundness bugs.

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.

I'd be way more comfortable with this if we distinguished between "concrete vulns" and "unsound", but we don't, so I'm wary of expanding the definition.

I'd say we should make that distinction. If we did, would you object to an "unsound" advisory being file for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

CAD97 commented 4 years ago

@RalfJung, here's a thought experiment:

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

Believing this strongly, I publish a new crate, trusted, which provides a single macro:

#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

with the intent of using this macro instead of unsafe blocks for when the unsafety is properly and fully contained within the block.

Is this an unsound crate? Does it deserve an advisory, informational or not? I'd argue that it adds an important distinction between encapsulated unsafe and "leaky unsafe" that also relies on safe code outside the block for correctness.

I ask this because plutonium::safe is the exact same API, just applied as an attribute to a function rather than a block style macro. I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed. It'd not be incorrect if that attribute also made the body of the function an unsafe block, as it, by existing, points to a review showing the annotated function safe.

I'd find it unfortunate if we said macros were sound or not based on whether their name contains the string unsafe.

If a macro is intended to always be safe to use, and in some cases allows you to execute unsafe code without upholding the preconditions, then that would be a soundness issue.

If a macro is intended to be unsafe to use, even if it doesn't contain the string unsafe in its name, I don't think its existence is unsound. You can argue that it's a bad crate, sure, but not that it deserves to be linted out of existence.

It's sort of blunt to put it that way, but it comes down to the fact that you need to read the docs for the macros you're using, because they can do anything. (Including, but not limited to, stealing your Bitcoin wallet at compile time.)

I'm in full agreement with @Manishearth here; you can use plutonium to write unsound, potentially vulnerable code, but plutonium itself is not unsound nor potentially vulnerable.


This is definitely on the edge. proc-ceasar is, honestly, probably also on the edge. The main worry here is, imho, that accepting these "maybe" cases into the central authority decreases trust in said central authority.

It would definitely benefit RustSec if it could provide objective rules for when something definitely does deserve an advisory, and objective rules for things that definitely don't deserve an advisory and should be delegated to more opinionated services like crev, geiger, and/or deny. Even if the middle ground is fuzzy, it's beneficial to have clearly defined areas and guidelines that can be used to make decisions for the middle area.

I would personally put the useful "yes this is advisory worthy" main goalposts at


Just a side note: to those saying crev is "worse" than RustSec for this kind of thing because it doesn't have a central authority, for one that's sort of the point, that it doesn't need one. But more importantly, nothing prevents the use of a crev proof repository as a central authority.

And yes, crev struggles to bootstrap itself to being useful, due in part to adoption. But if people don't adopt crev because it doesn't have adoption, it'll never get that adoption in the first place.

CAD97 commented 4 years ago

A final small wonder: is this (at least in part) due to documentation? If plutonium were more diplomatic in its documentation, would that change your opinion of its worthiness?

I will fully admit that the documentation of plutonium is less than diplomatic; its expressed purpose is to "make everything less safe".

But I think the RustSec central authority should at least attempt to avoid making calls based on the diplomacy of documentation. There is, imho, nothing inherently wrong with using macros to create a dialect of Rust where unsafe is spelled differently.

najamelan commented 4 years ago

I think unsafe fn and unsafe { } are different enough that they should have different keywords; the former expresses a requirement on the user to remain safe, and is potentially unsound to call; the latter is an upholding of requirements, and is never unsound to execute. (Assume I've put things in micro modules to eliminate safe code that can break safety invariants.)

For more, see https://doc.rust-lang.org/nomicon/safe-unsafe-meaning.html. From this link:

The need for all of this separation boils down to a single fundamental property of Safe Rust, the soundness property:

No matter what, Safe Rust can't cause Undefined Behavior.

So whatever I throw into the macro from plutonium, it can never cause any unsoundness, because the compiler has my back, right? Unfortunately, @RalfJung has already demonstrated that this is not the case.

By not requiring you to use the keyword unsafe, the author of plutonium implicitly guarantees that their implementation takes over from rustc in checking that whatever you put inside it is safe. That is, unless we question a very well-ingrained contract in the Rust language and community.

Since that is obviously not the case, by using the macro you tell any onlooker, "nothing to see here, rustc has your back, move on". That is lying, because you just turned off rustc's checks without wanting anyone to know you did. That alone doesn't inspire trust, if you ask me.

The difference between unsafe fn and unsafe {} is that in the former the caller's code needs to be audited. In the latter, if it happens inside a pub fn, then whatever the caller does, as long as they don't use unsafe, can never introduce unsoundness; it is the code implementing the function that needs to be scrutinized more carefully for errors. So if you want your users to not have to worry about making mistakes, provide them with a safe API. No need for a macro for that. Just don't try to hide the fact that you are playing with fire inside their house.

All in all, the two are conceptually very similar. In unsafe {} you are the caller; in pub unsafe fn you delegate responsibility to a downstream caller. That's all. We mainly want to know about the former. The latter still means that the documentation you wrote about how to use it correctly might be wrong, and it still allows creating unsoundness even if users follow the docs correctly, so you still have some responsibility and you can still make things blow up, though it's less likely than in the former. And by definition it's obvious that something unsafe is happening, so it is less dangerous in that sense. Users will often check the code as well as the docs, because it's obvious they have some responsibility if something goes wrong.
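
To make that contrast concrete, here is a hedged sketch with two hypothetical functions (not taken from any crate in this thread):

/// Safe API: the `unsafe` block is an implementation detail. Callers cannot cause
/// UB through this function no matter what they do, so the audit burden stays here.
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: index 0 is in bounds because the slice is non-empty.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

/// Unsafe API: responsibility is delegated to the caller, whose code now also needs
/// to be reviewed against the documented precondition.
///
/// # Safety
/// `index` must be in bounds for `bytes`.
pub unsafe fn byte_unchecked(bytes: &[u8], index: usize) -> u8 {
    // SAFETY: the caller promises that `index` is in bounds.
    unsafe { *bytes.get_unchecked(index) }
}

fn main() {
    let data = b"hi";
    assert_eq!(first_byte(data), Some(b'h'));
    // The caller takes on the proof obligation here, and `unsafe` marks that visibly.
    assert_eq!(unsafe { byte_unchecked(data, 1) }, b'i');
}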

I seem to recall safety-dance discussing potentially providing a version of #[safe] that takes a review identifier and a hash of the code, and prevents compiling the code if it changes from the reviewed. It'd not be incorrect if that attribute also made the body of the function an unsafe block, as it, by existing, points to a review showing the annotated function safe.

I'm not entirely sure I get what you mean here, but you can have a pub fn, with no mention of unsafe in your API, whose function body uses unsafe {}. That is, unsafe is not contagious.

As for reviews, just put a comment on it with a link to the review, or have the actual review text and name of the reviewer in the comment. Of course, if you later change the code, the review is no longer valid and I imagine that's what the discussion was about. No need to remove the unsafe keyword though.

CAD97 commented 4 years ago

I'm not entirely sure I get what you mean here, but you can have a pub fn, with no mention of unsafe in your API, whose function body uses unsafe {}. That is, unsafe is not contagious.

And that's what the #[safe] attribute in plutonium is, once you strip away the documentation's presentation of "make everything less safe" anyway.

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}. Inside the function annotated by #[plutonium::safe], you're not writing Safe Rust, you're writing Unsafe Rust. If you accept that #[plutonium::safe] is an "unsafe macro" (which the language has no definition of, so falls entirely to documentation), then there is nothing wrong with this.

To be clear, I know the difference between unsafe blocks and functions. That's why I think you can argue for my thought-experiment trusted! as being a meaningful distinction. #[plutonium::safe] is then the exact same API, just as an attribute rather than a block macro.

So let me ask you directly, @najamelan:

Say I (or @myrrlyn, who has expressed that using this sort of macro would make his crate maintenance easier by separating "kinds" of unsafe) release the trusted crate, whose entire body I've inlined below.

/// Alias for an `unsafe` block.
///
/// This should be used to mark `unsafe` blocks that are not in any way dependent
/// on external safe code for correctness. That is, any unsafe preconditions should be
/// upheld solely by code within this block and safety invariants of the code it calls.
#[macro_export]
macro_rules! trusted {
    {$($tt:tt)*} => {unsafe{$($tt)*}};
}

Would you see it fit to submit a security advisory for that crate? After all, I can write

#[macro_use]
extern crate trusted;

fn main() {
    trusted! {
        *std::ptr::null::<u8>();
    }
}

and cause UB without the string unsafe anywhere in my code.

If you would, I strongly disagree with you. trusted! is explicitly a different way of spelling unsafe that communicates a different intent than just an unsafe block.

If you wouldn't, what makes this any different than #[plutonium::safe]?

najamelan commented 4 years ago

Would you see it fit to submit a security advisory for that crate?

Yes

It is just a transform from #[safe] fn foo() { /* code */ } to fn foo() {unsafe{ /* code */ }}

I think I really don't understand what makes the former desirable over the latter. E.g., what problem are you trying to solve? If you want to distinguish between "different kinds of unsafe" in your crate, just put a keyword of your choice in a comment; then you can even search through your codebase for it, no? No unsoundness, no lying to reviewers, ...

tarcieri commented 4 years ago

@RalfJung

We could decide that unsoundness is not a problem when it is intentional. That is how I understand your argument. I strongly disagree with that.

This seems to be the core argument here. I am inclined to agree with you, with the caveat that deliberately unsound APIs do not contain a “vulnerability” per se (but are still fundamentally “unsound” from an objective, technical perspective).

I feel there is some baggage from the way this particular advisory was filed, so it'd be nice to separate discussion of the above topic into a new issue if anyone wants to open one (not to mention this issue is closed).

CAD97 commented 4 years ago

@najamelan and I don't see the purpose of requiring unsafe code to spell out the literal unsafe keyword when this kind of dialect-building is exactly what Rust's macro system is for. Nothing is unsound about trusted!, unless you have a view of soundness so draconian that the Rust language itself is unsound because it allows you to cause UB in unsafe blocks. How is writing trusted! different from writing unsafe in terms of the soundness of the API?

I personally agree that it's probably not worth it to use trusted! rather than unsafe. But I don't think the answer is to uniformly declare trusted! as evil and attempt to lint it out of existence. (And to be clear: that is what a RustSec vulnerability advisory is (perceived as): an attempt to lint (the unfixed versions of) a crate out of use.)

In any case, I'm not going to argue this point any further one way or another. I agree that you probably shouldn't be using plutonium, after all!

What I disagree with is trying to ban plutonium or any other crate that tries to offer a different way to scope unsafe. Experimentation is a good thing. I see linting plutonium away as roughly equivalent to linting fehler away: many people would prefer that both of these crates aren't used. But the macro system exists specifically so that people can create micro scoped dialects that express their problem domain better than macroless Rust.

Manishearth commented 4 years ago

If we did, would you object to an "unsound" advisory being file for plutonium? (EDIT: ah, looks like you do not. We remain in very strong disagreement then.)

If we did, I would still dislike it, but I would not be bothered to actually object.

That's like saying a buffer overflow is not a bug when the crate says that it is intentional.

A buffer overflow is different, though. If a crate intentionally causes a buffer overflow, it should be marked as a soundness vuln. This is one level removed: this crate enables others to intentionally escape unsafe. People could, for example, use it the way @CAD97 (and earlier, @myrrlyn) describe, as an "alternate unsafe block" with different semantics, and it would be totally safe. Hell, we've had multiple discussions in the past about whether or not unsafe should have been called assert_safe, safe, trustme, trusted, or another one of many words.

Intent matters because the people writing intentional bugs do not need to be warned, their downstream does. If someone uses plutonium for bad, they should absolutely have their crate placed in this db.

(I actually really want the converse of this crate, a macro that converts fn foo(args) {body} to unsafe fn foo(args){ fn foo_inner(args){body}; foo_inner() } so that we can experiment with https://github.com/rust-lang/rfcs/pull/2585 )
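
Written out by hand for one function, the expansion described above would look roughly like this (hypothetical; no published macro is being referenced):

// The outer signature is `unsafe`, so callers must opt in, but the body stays
// ordinary safe code with no implicit unsafe block -- the spirit of RFC 2585
// (unsafe_op_in_unsafe_fn).
unsafe fn add_one(x: u32) -> u32 {
    fn add_one_inner(x: u32) -> u32 {
        // Plain safe code: any unsafe operation here would still need its own
        // `unsafe { ... }` block.
        x + 1
    }
    add_one_inner(x)
}

fn main() {
    // Calling it requires `unsafe`, even though the body is entirely safe.
    assert_eq!(unsafe { add_one(41) }, 42);
}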

najamelan commented 4 years ago

attempt to lint it out of existence ... trying to ban plutonium ... I see linting plutonium away

I apologize if the wording in the advisory that it shouldn't be used in production was too strong and too opinionated, and I have no problem with that being changed, but other than that, I really don't see where you are coming from.

Nobody is trying to ban anything or lint it out of existence, only to make sure that people are conscious that something really fishy slipped into their dependency tree and that they are OK with that. If there are many like you, it wouldn't be linted out of existence at all. It does sound a bit like the argument that GMO usage shouldn't be labeled in ingredient lists because people wouldn't want to buy it.