These methods are stable; they cannot be marked `unsafe` (almost the entire ecosystem would break). I guess things like `/proc/self/mem` are out of scope for Rust's safety guarantees.
According to the stability document:

> We reserve the right to fix compiler bugs, patch safety holes, and change type inference in ways that may occasionally require new type annotations.

"may occasionally require new type annotations" (okay, this just refers to inference changes). `unsafe` is neither a type annotation, nor would it happen "occasionally", since these are such fundamental APIs.
It's the safety holes part that justifies this. This is a safety hole. It's essentially the same as the scoped thread API, only the scoped thread API was removed from Rust before it was stabilized.
I wouldn't consider this a safety hole. The fact that harmless file system operations can change arbitrary memory is unfortunate, but there will always be ways to somehow circumvent Rust's safety guarantees with external "help".
The scoped thread API was a process-internal API not provided by the OS but by the Rust standard lib alone, and was only unsafe because of wrong assumptions made while it was designed.
Marking a stable function as unsafe would probably be covered by the quoted wording even if the function was widely used, but that doesn't mean it's a reasonable interpretation in this case. Making opening a file unsafe is so baldly ridiculous that I am asking myself whether this is an April Fools' joke.
Surely it is possible to get around the symlink thing and avoid opening `/proc/self/mem` for writing?
It proposes to make `println!` unsafe. Am now reminded of why I don't like April Fools'.
You can also use the safe `std::process::Command` to launch an evil program which invades your process's memory and corrupts it.
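A sketch of the mechanism this alludes to (not code from the thread): safe `std::process::Command` calls spawn a debugger that scribbles on this process's memory from outside. It assumes Linux, an installed gdb, and permissive ptrace settings (e.g. `kernel.yama.ptrace_scope=0`); the variable `x` and the gdb command are illustrative.

```rust
use std::process::Command;

fn main() {
    let x: u32 = 7;
    let addr = &x as *const u32 as usize;
    let pid = std::process::id().to_string();
    // Ask gdb to attach to this very process and overwrite `x` from outside.
    // Every Rust API used here is safe; the corruption arrives via the OS.
    let poke = format!("set {{unsigned int}}{addr:#x} = 42");
    let status = Command::new("gdb")
        .arg("--batch")
        .arg("-p").arg(&pid)
        .arg("-ex").arg(&poke)
        .status()
        .expect("failed to spawn gdb");
    println!("gdb exited with {status}, x is now {x}"); // likely 42 in debug builds
}
```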
What's worse, you could call unsafe code from safe code! This can only lead to security holes! Let's require `unsafe { .. }` around all the code and define every function as unsafe, to remind programmers that it's a scary world out there...
The obvious solution is of course not to mark the methods as unsafe, but to drop Linux and Windows support and only support Redox (it's written in Rust so it must be safe) ;)
> The obvious solution is of course not to mark the methods as unsafe, but to drop Linux and Windows support and only support Redox (it's written in Rust so it must be safe) ;)

That won't do. Redox uses `unsafe`. To make sure such mistakes don't happen again, we must stop using `unsafe`, perhaps even remove it after an appropriate waiting period (say, two releases). This will require a rewrite of the unsafe parts of the standard library, but I'm sure the compiler team will be receptive to migrating all the functions that currently use `unsafe` to compiler intrinsics (which will be safe because the compiler is never wrong).
FWIW I've long harbored thoughts that "safe" Rust does not go far enough. Removing `unsafe` is only the first step; supposedly "safe" code can still take incredibly dangerous actions (e.g., `std::process::Command` can be used to invoke `rm -rf /`). The Rust designers were wise to identify shared mutable access as a huge source of problems, but side effects are still permitted despite the overwhelming evidence that most unsafe code has side effects. Consequently, I'm currently drafting an RFC to make Rust a pure, 100% side-effect-free language. `main` will become a pure action of type `impl Unsafe<()>`.

... aw crud. That requires a `Monad` trait. Yet another RFC blocked on HKT!
Except that deleting everything is not unsafe, while poking around arbitrarily in memory is.
That is unfortunate short-sighted corner-cutting. `rm -rf /` is the number one threat to cybersecurity. Ignoring it or declaring it out of scope will just diminish Rust's importance.
Won't `rm -rf /` also delete `/proc/self/mem` at some point? :joy:
Nope. I get permission denied when I try to `rm /proc/self/mem`, even as root.
Since this seems to be an April Fools' thing, and the thing's over, I'm closing. If my conjectures are incorrect, complain 😊
tbh, if the code above actually works, maybe it should have an issue for real. I think it may be considered a soundness hole, although it may not be worth fixing. It shouldn't be this issue, though; the silliness would detract from serious discussion.
I guess this is similar to creating a safe wrapper for a C library: even if an unusual input is given and it triggers a bug/memory error in the library, the interface should remain safe.
The example works.
I guess I'll nominate this for the libs team to see (since it is technically a way to circumvent memory safety), but I doubt the outcome will be any different.
Sorry for the premature close!
> It shouldn't be this issue though
It doesn’t really matter IME, because the core of the issue is still here.
I think triggering memory unsafety from outside (using the filesystem, another process, etc.) is not within the scope of Rust's safety guarantees.
Imagine implementing a library for controlling a repair robot. It has a long flexible manipulator which can grab things, solder, connect to pins and do JTAG, etc. Should functions like `Manipulator::raise` or `SolderingIron::activate` be unsafe? No.
But one can program the RepairBot to loop the manipulator back to the robot's own back, open the lid, solder to the JTAG pins and then trigger overwriting memory, hence causing memory unsafety in Rust terms.
A more down-to-earth example: imagine implementing a debugger like `gdb` or `scanmem` in Rust. Debugging other programs is safe, but debugging oneself is not.
When triggering the unsafety involves some external component like the filesystem, it is out of scope for Rust's safe/unsafe distinction. You can't reliably detect a strange loop.
It's not like it's going to be fixed, whatever happens.
It is the responsibility of the one running the program to provide a safe interface to the operating system and hardware. Rust cannot guard against your computer literally blowing itself up because your program executed some booby-trapped instruction.
Are you saying that play.rust-lang.org, not to mention basically every Linux distribution in existence, is misconfigured?
Your program should be in a sandbox denying access to any file it does not need, if you don't want this to happen. `rm -rf ~` is worse than most undefined behaviour. BTW, it seems your example does not segfault in release mode.
@vks, `rm -Rf ~` is predictable, reproducible, reversible (if there are backups) data loss.
Undefined behaviour, on the other hand, can lead to malware code execution (which, for example, sends private data from `~` somewhere); it is unpredictable, poorly reproducible and may be irreversible.
`rm -rf ~` is a valid outcome of undefined behavior (as is anything else...).
I do not buy the suggestion that memory unsafety is okay if it's caused by interacting badly with the operating system or specific hardware. Certainly it is unacceptable for a safe Rust library to, say, invoke the `read` system call with an invalid pointer, overwriting arbitrary memory. The write is performed by the OS, but it's clearly both the fault of the Rust code and in its power to prevent that from happening. There's an often-repeated promise: that a Rust program that uses the standard library and no unsafe block of its own is memory safe. This promise is not fulfilled here.
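To make that hypothetical concrete, here is a sketch of the kind of interface being ruled out, assuming the `libc` crate; `evil_read` is an illustrative name, not anything from the thread:

```rust
use std::os::unix::io::RawFd;

/// Not marked `unsafe`, yet it asks the kernel to write through an
/// arbitrary pointer. If `addr` happens to be mapped, whatever lives
/// there gets overwritten; the OS does the write, but the fault is
/// entirely this function's — exactly what a safe API must never do.
fn evil_read(fd: RawFd, addr: usize, len: usize) -> isize {
    unsafe { libc::read(fd, addr as *mut libc::c_void, len) }
}
```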
The point of a safe Rust interface is to offer precisely that: a memory-safe interface, an API that Rust code can use without worry of memory unsafety, in all circumstances, whatever silly mistake the programmer might make. The point is not to assign blame but to isolate the unsafety and fight it, so that the end result is safer, more reliable programs. Therefore, functions that are not marked `unsafe` are held to a very high standard, no matter how contrived the circumstances in which they might misbehave. For example, nobody in their right mind would go out and leak four billion `Rc`s and overflow the refcount, but the standard library still got patched to prevent memory unsafety in that case.
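For reference, that overflow scenario looks roughly like this sketch; on 64-bit targets the loop would take far too long to run in practice, and current std aborts when the strong count would overflow:

```rust
use std::rc::Rc;

fn main() {
    let rc = Rc::new(());
    // Each forgotten clone increments the strong count and never
    // decrements it. If the count wrapped around to zero, a later drop
    // could free memory that is still referenced; the standard library
    // now aborts instead of letting the count wrap.
    loop {
        std::mem::forget(rc.clone());
    }
}
```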
Now, clearly, Rust cannot take care of everything. It can't prevent side-channel attacks. It can't prevent authentication code from containing a logical bug that gives every user full privileges. It can't foresee a hardware error in future generations of CPUs that will cause them to write to address `0xDEADBEEE` when a `mov` is executed with a destination of `0xDEADBEEF`, and pre-emptively add a check to all memory writes. It can't prevent a human (or a robot) from making the most of physical access to the hardware. It can't do a billion other things that are important for the secure and correct operation of computer systems. But that does not mean its responsibilities already end at the process boundary.
If a safe function in the Rust library is called, and it ends up overwriting memory of the running process, then that is a memory safety issue. It doesn't matter one bit if it goes through the OS, specifically, through the mock `/proc` file system: the system call works as documented and advertised. This is not a case of a single machine being ill-configured or a bug in an obscure pre-release version of a third-party library. It is fully expected and works on millions of machines.
That's cool and all, but there's a reason I closed this bug: this particular safety hole is infeasible to fix. Other "memory-safe" languages (Python, Haskell, Java) share this hole, so there is probably no easy way to fix it, and the hard ways of fixing it would get in the way far more than they help. (Marking the file-open APIs unsafe would just be stupid.)
@notriddle It probably wasn't clear from my rant, but I am standing by my earlier position of "this may very well be not worth fixing". Like you, I am skeptical that it can be reasonably prevented. But several recent comments in this thread seem to veer too far into another direction, sounding like "apologists" for memory unsafety so to speak. I am strongly of the opinion that leaving such a hole open must come from a pragmatist "We'd like to fix it but it's just not practical" place, not from a "It's hard to trigger and not our fault so whatevs :shrug:" direction.
@notriddle: can't `open()` simply be patched to return an `Err` on opening `/proc/self/mem` for writing on Linux? Seems simple and fine.
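The suggestion amounts to something like the following sketch (`open_checked` is a hypothetical name, not a proposed std API); the replies below explain why a path comparison like this is insufficient:

```rust
use std::fs::{File, OpenOptions};
use std::io;
use std::path::Path;

fn open_checked(path: &Path) -> io::Result<File> {
    // Naive check: reject the literal path. Symlinks, alternate procfs
    // mounts, and /proc/<pid>/mem all bypass this trivially.
    if path == Path::new("/proc/self/mem") {
        return Err(io::Error::new(
            io::ErrorKind::PermissionDenied,
            "refusing to open process memory",
        ));
    }
    OpenOptions::new().write(true).open(path)
}
```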
@sanmai-NL, what if procfs is mounted at `/tmp` and tmpfs is mounted at `/proc`?
Also imagine an operating system that has a socket analogue of `/proc/self/mem`: you `connect` to a special address and the socket becomes a view into the memory of your own process. Are sockets now unsafe, or must we check where we actually connect (while a netfilter-like mechanism can redirect addresses)?
The main issue is that OSes allow memory introspection, and that should not be considered Rust-unsafe.
An even more intricate example: imagine a robotics library for an electronics repair robot that allows user code to control special manipulators that move around using motors and connect to the pins (e.g. JTAG) of various boards around it. But what if we command it to connect to the JTAG of the same board that is controlling the robot ("self-repair mode")? Now we can read/write any memory, including memory mapped into the Rust process. Does that make the motor-controlling or digital pin input/output functions Rust-unsafe?
There are also limitations/defects in common hardware, such as those exploited by https://en.wikipedia.org/wiki/Row_hammer to modify arbitrary memory. Does rowhammer imply that all memory accesses should be considered unsafe? XD
Could we at least have a way to mark when Rust code depends on external code (OS, C libs, etc.)? So maybe when a function contains unsafe, it gets marked in the docs, and that marker spreads to any function using it; that way you have a reliable way of determining whether there could be unsoundness because of external code. The idea is that if you only use functions without the marker and no unsafe, you shouldn't have to worry about soundness at all, other than the Rust compiler containing a bug or the machine the code is running on malfunctioning. I know this isn't that useful, since a lot of functions will end up getting marked, but at least it's a decent indication of how much you can trust a function with arbitrary inputs not to corrupt memory.
@Error1000 TBH, in that light you can consider any Rust code unsafe. A big chunk of the stdlib depends on libc, and many low-level crates that are pervasive throughout the ecosystem use unsafe.
I don't think you can write any non-trivial Rust program that does not have unsafe in its dependencies.
As for the problem at hand, I don't agree that it's simply outside the scope of Rust. From an OS perspective, a program is allowed to do pretty much whatever it wants with its own memory. It's the Rust paradigm to put limitations on that and require the unsafe keyword. libc is one interface to the OS which allows a ton of things for which Rust requires unsafe, and here it turns out that `/proc/self/mem` is another one.
I also don't see a clear argument as to why this can't be fixed, but then I'm not a Linux filesystem expert. Sure, `std::fs::File::open` and friends could refuse to open `/proc/self/mem`. There are further holes via (sym)links, mount points, etc., but it might be possible to compare inode numbers, for example, to detect that, I presume. This is where my expertise falls short, but I'm not convinced we can't do better.
The question then becomes whether it is possible to close this hole and whether the perf overhead in these functions is acceptable. It would also require unsafe equivalents of these functions to allow opening files like `/proc/self/mem`.
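The inode idea would look something like this sketch (`is_self_mem` is an illustrative helper; the reply below explains why such a check is incomplete):

```rust
use std::fs::File;
use std::io;
use std::os::unix::fs::MetadataExt;

fn is_self_mem(candidate: &File) -> io::Result<bool> {
    // Open our own mem file read-only as a reference point and compare
    // (device, inode) pairs: two handles name the same file iff both match.
    let mem = File::open("/proc/self/mem")?;
    let (a, b) = (candidate.metadata()?, mem.metadata()?);
    Ok(a.dev() == b.dev() && a.ino() == b.ino())
}
```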
I think panicking whenever the mem file is opened would be a viable solution to this problem.
@botahamec How would you do that? Note that it's possible to create symlinks, as mentioned in the original issue description.
For anyone following along here, I've now posted https://github.com/rust-lang/rust/pull/97837 to propose documentation for Rust's stance on /proc/self/mem.
@najamelan Comparing inode numbers isn't trivial, because procfs can be mounted in multiple places, and when it is, each instance has its own inode numbers. Also, self/mem isn't the only dangerous file; there's also self/task/&lt;tid&gt;/mem, each with its own inode number, and threads can come and go dynamically. With enough cleverness, it may ultimately be possible to design a system which reliably detects whether a `File::open` call is opening a procfs `mem` file, but such a system would add overhead that most users don't need, it would be vulnerable to OSes adding new features in the future, and it wouldn't completely solve the underlying problem of programs reaching out to the outside world and causing things to reach back in.
This program writes to arbitrary memory, violating Rust's safety guarantees, despite using no unsafe code:
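(The original snippet is not preserved here; the following is a minimal reconstruction of the technique described, assuming Linux and a debug build so the write is observable.)

```rust
use std::fs::OpenOptions;
use std::io::{Seek, SeekFrom, Write};

fn main() {
    let x: u8 = 0;
    let addr = &x as *const u8 as u64;
    // Opening our own memory is an ordinary, "safe" file operation.
    let mut mem = OpenOptions::new()
        .write(true)
        .open("/proc/self/mem")
        .unwrap();
    // Seek to x's address and overwrite it behind the borrow checker's back.
    mem.seek(SeekFrom::Start(addr)).unwrap();
    mem.write_all(&[42]).unwrap();
    assert_eq!(x, 42); // x was never mutably borrowed, yet it changed
}
```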
Because the filesystem APIs cannot be made safe (blocking `/proc` paths specifically will not work, because symlinks can be created to it), `File::create`, `File::open`, and `OpenOptions::open` should be marked unsafe. I am working on an RFC for that right now.