RalfJung opened this issue 1 year ago
I would certainly like for it to be undefined behaviour, for a few reasons:

I would like to be able to put xlang's `const` on lowered local variables for non-mutable bindings. This would allow the value of the local to be optimized past an opaque function call given `addr_of!(x)` rather than `&x`, which I would expect to be the case (thus allowing the value to be constant propagated, or potentially stored in a register that doesn't get spilled by the call).
Going to the Generator/Future and Closure questions, I would like to be able to do const-capture elision, and the best way to define it would be:

If the nth capture does not have its address taken by the closure body, and is a non-mutable binding initialized by a constant expression, the capture type `Tn` is `()`, and uses of the capture's name substitute the value of the constant expression.

The definition could rule out using `addr_of!()` on the binding before the closure, but that just gets complicated. See:
```rust
let x = true;
if false {
    unsafe { call_opaque_function(addr_of!(x)) } // does this affect the closure layout?
}
(|| println!("{}", { x }))();
```
(loops do even more wacky things)
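For concreteness, a sketch of what const-capture elision would change about closure layout; the printed sizes are what a typical 64-bit target produces today, and the zero-sized outcome is the hypothetical optimization, not current behaviour:

```rust
use std::mem::size_of_val;

fn main() {
    let x: u8 = 42; // non-`mut` binding initialized by a constant expression

    // Captures `x` by reference: the closure stores a `&u8`.
    let by_ref = || println!("{}", x);
    // `move` forces a by-value capture: the closure stores the `u8` itself.
    let by_val = move || println!("{}", x);

    // Prints "8 1" on a typical 64-bit target today.
    println!("{} {}", size_of_val(&by_ref), size_of_val(&by_val));

    // Under the const-capture elision sketched above, `by_val` could be
    // zero-sized: its capture's address is never taken in the body, so uses
    // of `x` could be replaced by the constant 42 and the capture dropped
    // from the layout.
    by_ref();
    by_val();
}
```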
the value of the local to be optimized past an opaque function call given addr_of!(x) rather than &x
Right, that is the key optimization difference here (for the case where the opaque fn call takes that ptr-to-x as argument).
I just don't think this optimization is important enough to justify introducing a whole new kind of read-only allocation. Specifying these "read-only" locals will be tricky since of course they are written to, even more than once, to receive their initial value:
let x: (i32, i32);
x.0 = 0;
x.1 = 1;
I think we do want `addr_of!`/`addr_of_mut!` to not generate a fresh tag and just return an exact alias of the pointer they start with. This is pretty much required to fix https://github.com/rust-lang/unsafe-code-guidelines/issues/134. If we take that as a given then making `addr_of!(local)` not mutable cannot be done by the aliasing model, it has to be a property of the allocation itself. Moreover the allocation cannot be read-only, it would have to be something like write-once-per-location... or we need an explicit MIR statement saying "now mark this memory read-only" (because it has been initialized).
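A sketch of the kind of code this "exact alias" behaviour is meant to keep simple, assuming the intent described above: two raw pointers to the same local should be interchangeable, with neither invalidating the other.

```rust
use std::ptr::addr_of_mut;

fn main() {
    let mut x = 0i32;
    // Both calls should yield an exact alias of the pointer to `x`,
    // asserting no uniqueness of their own.
    let a = addr_of_mut!(x);
    let b = addr_of_mut!(x);
    unsafe {
        // Interleaved writes through the two aliases; if each macro
        // invocation produced a fresh uniqueness-asserting tag, code like
        // this would be much harder to justify.
        *a = 1;
        *b = 2;
        *a = 3;
    }
    assert_eq!(x, 3);
}
```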
If the nth capture does not have it's address taken by the closure body, and is a non-mutable binding initialized by a constant expression
This issue here is only relevant for cases where the address is taken so this seems to be orthogonal.
Also I am fairly sure we don't want any rules that look anything like this in our op.sem. "Local has its address taken" is not a property that is stable under optimizations. Capture elision, like other optimizations, should fall out of the general properties of the model -- it should not itself be part of the spec.
I just don't think this optimization is important enough to justify introducing a whole new kind of read-only allocation. Specifying these "read-only" locals will be tricky since of course they are written to, even more than once, to receive their initial value:
let x: (i32, i32); x.0 = 0; x.1 = 1;
I think we do want addr_of!/addr_of_mut! to not generate a fresh tag and just return an exact alias of the pointer they start with. This is pretty much required to fix #134 https://github.com/rust-lang/unsafe-code-guidelines/issues/134. If we take that as a given then making addr_of!(local) not mutable cannot be done by the aliasing model, it has to be a property of the allocation itself. Moreover the allocation cannot be read-only, it would have to be something like write-once... or we need an explicit MIR statement saying "now mark this memory read-only" (because it has been initialized).
Hmm... yeah. Partial init is a problem. For full init, the allocation itself could possibly come into existence when it's initialized. My main reason for wanting this is when the local is initialized with a constant expression (or, at least, something I can evaluate as one), but separating out that circumstance is definitely challenging.
If the nth capture does not have it's address taken by the closure body, and is a non-mutable binding initialized by a constant expression
This issue here is only relevant for cases where the address is taken so this seems to be orthogonal.
Right, but the difference here is whether the address is taken outside of the closure or inside. I'd really like to be able to make the capture rules for layout purposes only care about the initial declaration of the binding and the body of the closure itself.
Also I am fairly sure we don't want any rules that look anything like this in our op.sem. "Local has its address taken" is not a property that is stable under optimizations. Capture elision, like other optimizations, should fall out of the general properties of the model -- it should not itself be part of the spec.
Yeah, I should have clarified: this is a potential rule in a future version of lccc's ABI spec, not Rust. It does apply its decision making pre-optimization, since the point is that optimizations shouldn't affect something that can move across CGUs in an unstable way. I'd like to be able to justify having that rule in the first instance, since IMO const-propagation capture elision is a useful layout optimization, and that rule is a non-complex way of defining it.
Another related but I think distinct potential reason to prohibit mutation of a `mut`-less binding is that it permits the value to be promoted to a static. A couple other things also need to be true for the transform to both be valid and potentially desirable[^1], but this is certainly a potentially useful transformation to be able to apply.
Basically, being able to constant-propagate a static promotion as an optimization, rather than requiring the developer to notice it can be and ask for it to be const evaluated to get access to the potential promotion benefit.
[^1]: Namely: at least that we don't guarantee (non-async) locals' address to be in the stack region, and some amount of relaxing address uniqueness (so multiple statically-immutable locals with the same value can be colocated). The transform is perhaps most interesting as a way to optimize constant values out of futures' witness tables (and into the static memory region).
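A before/after sketch of the promotion being described; `source`/`promoted` are hypothetical names and rustc is not claimed to perform this transform today:

```rust
// Hypothetical before/after pair illustrating promotion of an immutable local.
fn source(f: impl Fn(&i32)) {
    let x = 5; // immutable binding with a constant initializer
    f(&x);     // only shared references to `x` escape
}

fn promoted(f: impl Fn(&i32)) {
    // If mutating `x` were UB (and address uniqueness suitably relaxed),
    // the value could be backed by static memory instead of the stack or
    // a future's state:
    static X: i32 = 5;
    f(&X);
}
```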
We can just not do that promotion if we see an `addr_of!(var)`. This should be an easy analysis.
As the maintainer, co-author, and also user of an unsettling and increasing amount of `unsafe` code that is expanded under cover of macros, I do not want the following code to ever complete:
let x = 5;
let stuff = construct!(x);
stuff.do_something();
assert_ne!(x, 5);
println!("if this code is reached, modular reasoning got axed.");
I would prefer the assert-not-equals line to always trigger its associated panic.
And I do pass (via pointer) a considerable number of values to FFI that are then left unmodified, and I would prefer that I not have to reason carefully about optimization rules when picking between `&T` and `ptr::addr_of!`. I prefer the much-more-boring reasoning of "did I want a pointer, or did I want a reference?" to asking questions like "what exciting nuances of the opsem model will bleed through into my code's compilation today?"
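The two spellings being weighed, in a minimal FFI-shaped sketch; `ffi_consume` is a made-up stand-in for a real `extern "C"` function:

```rust
use std::ptr::addr_of;

// Stand-in for an FFI function that only reads through the pointer
// (a real one would be declared in an `extern "C"` block).
unsafe fn ffi_consume(p: *const i32) {
    unsafe { println!("{}", *p) };
}

fn pass_to_ffi() {
    let x = 5;
    // A shared reference coerces to `*const i32`, but `&x` also makes the
    // usual no-mutation promises of `&T` to the compiler.
    unsafe { ffi_consume(&x) };
    // A raw pointer from the start makes no such promises.
    unsafe { ffi_consume(addr_of!(x)) };
}
```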
No unsafe code programming problem is made better by adding more UB. So I am not sure what you are arguing for here.
Making it defined means promising to compile it which means people will start introducing things which rely on it working.
Making it undefined is worse: it turns this into non-debuggable bugs. UB exists for optimizations and nothing else. (And sometimes as a way to paper over platform differences that we haven't found a better way to explain away yet.) Using it for anything else is just the worst kind of bug amplification.
We're talking at the level of validity invariants here, not safety invariants. If you write a library abstraction, the client's rules (safety invariant) are yours to define and default to "what can be done in safe code". Safe code cannot mutate these variables, so your libraries are not affected.
In practice people do in fact notice and report "the compiler discarded a write" when they pass `&something` to FFI (when they should have passed `&mut T` or something). It might be non-debuggable according to the formalization that says the behavior is non-predictable at that point.
We're talking at the level of validity invariants here, not safety invariants. If you write a library abstraction, the client's rules (safety invariant) are yours to define and default to "what can be done in safe code". Safe code cannot mutate these variables, so your libraries are not affected.
And in practice, people do write macros that expand to `unsafe {}` blocks, including FFI calls, thus making it so that they appear safe to use, and then someone else has to audit that unsafe code for its correctness.
And programmers are not unwilling to brute-force-search for code that works.
basically, in order for someone to reason "xyz var is never mutated", even if they know the monomorphic type, they must now prove a negative: no one ever used `ptr::addr_of!` on this ident. if the source is obfuscated by overly-clever constructions, this can be a very long search, especially if the source already has reasons to fling pointers around.
comparatively, the search for "is this `let mut` or is this type `UnsafeCell`?" is much shorter, in my experience, since I know exactly where to look for each.

...I suppose they could also declare every single "no, really, it's immutable" variable as a constant but perhaps Rust programmers would rather not `BE_CONSTANTLY_SCREAMING`.
basically, in order for someone to reason [...]
We're discussing the reasoning principles permitted to the compiler here, not the reasoning principles permitted to programmers.
And programmers are not unwilling to brute-force-search for code that works.
That's a doomed approach in a language with UB, it makes zero sense to try to account for it. Furthermore, even if we did add the extra UB you are asking for, things would still seem to "work" in many cases when you don't want them to. So this argument doesn't even support (my understanding of) your position.
And in practice, people do write macros that expand to unsafe {} blocks, including FFI calls, thus making it so that they appear safe to use, and then someone else has to audit that unsafe code for its correctness.
These are true statements and I fail to see the relation to this discussion. People also write safe functions that contain unsafe blocks that cause UB -- people have bugs in their code. The notion of soundness for macros is a bit unclear, but certainly does not involve having to study the code the macro expands to, any more than the notion of soundness of a safe function involves having to study the code inside that function.
I honestly don't understand what kind of API you're even concerned about here. The code you sketched above doesn't help me. But as a general principle there are large priors against a soundness concern motivating more UB. That's like fixing someone's trouble with their door handle by just blowing up the entire car.
I just don't think this optimization is important enough to justify introducing a whole new kind of read-only allocation. Specifying these "read-only" locals will be tricky since of course they are written to, even more than once, to receive their initial value:
let x: (i32, i32); x.0 = 0; x.1 = 1;
I copied this into the playground and it doesn't compile (E0381 'partially assigned binding `x` isn't fully initialized'). As such it seems you can only write to an uninitialized let binding once.

As far as I can tell, it is also impossible to take a pointer to an uninitialized let binding. If you cannot write to a let binding more than once as its initialization, and you cannot take a pointer to an uninitialized let binding, then any let binding you have a pointer to must already be initialized, and therefore allowing writes through that pointer would be as surprising as allowing writes to an initialized let binding directly (e.g. allowing `let x = 0; x = 1;` to compile).
I copied this into the playground and it doesn't compile (E0381 'partially assigned binding `x` isn't fully initialized'). As such it seems you can only write to an uninitialized let binding once.
There was a proposal to allow this, though I can't find it right now. It's a fairly natural language extension that we shouldn't prevent: basically, just treat initialization entirely per-field, and once all fields are initialized, consider the entire value initialized.
Also, even just the single write that we already permit to `let` bindings is problematic enough to make this highly non-trivial to specify:
let x: i32;
if b { x = 13; }
Also, for MIR semantics we definitely want to permit deaggregation, where this
let x: (i32, i32);
x = (0, 1);
gets transformed into
let x: (i32, i32);
x.0 = 0;
x.1 = 1;
Partial initialization is still kinda beside the point if you can only take a pointer to a let binding statically known to be fully initialized.
No, taking a pointer is beside the point. ;) The allocation exists already before the pointer gets created. We're not going to have completely different operational semantics for `let` before and after they have the pointer taken, that would be a huge pain.
this issue has allowing
let x = 0i32;
let ptr = addr_of!(x) as *mut i32;
unsafe { *ptr = 42; }
as its central question and i'm saying that allowing it would be as surprising as allowing
let x = 0i32;
x = 42;
and uninitialized let bindings are already 'special'.
We're not going to have completely different operational semantics for let before and after they have the pointer taken, that would be a huge pain.
I'm saying that if a pointer can be taken then the weirdness with uninitialized let bindings is already over.
FTR, in lccc currently I do distinguish between locals that have and have not had their address taken (though currently not for aggregates; I need to add an `insertfield` mir-expr, which is waiting on the equivalent XIR). Before a local has its address taken, it's just purely held as an SSA var (which, through magic, ends up on the xlang value stack). Reassignments don't matter because they're just always new values. After the address is taken, alloca is used (which becomes a local variable in xlang), based on the Mutability of the binding in HIR. It would be nice if I could use a read-only allocation for `alloca const <type>`.
A similar model could theoretically be used in MiniRust to achieve the required/desired semantics.
and uninitialized let bindings are already 'special'.
You're thinking in terms of surface Rust, but that's not how it works in the operational semantics. In MIR, there isn't even a difference between `let x = 0;` and `let x; x = 0;`.
I'm saying that if a pointer can be taken then the weirdness with uninitialized let bindings is already over.
No idea which point you're trying to make here. We need one rule for what happens at a write to a place, treating all places uniformly. We don't even know we're writing to a local variable when we do the `*ptr = 42;`, we're just writing to a memory location that's part of some allocation. We could say that the allocation that backs a `let` is read-only, but then even the initial write would be forbidden. Whether or not a pointer is ever taken has nothing to do with this; the writes `x = 0` and `*ptr = 0` look exactly the same to the memory model (assuming `*ptr` points to `x`).
I suggest making yourself familiar with MiniRust or a similar operational model. Just jumping into a deeply technical discussion can cause confusion when you're not familiar with the technical background.
A similar model could theoretically be used in Minirust to achieve the required/desired semantics.
Miri uses such a model to be a bit faster. But having such an optimization in the spec would IMO be a big mistake. The spec should be as clean and lean as possible. It's a bad idea to optimize the spec for performance. It risks issues like this where you accidentally change the spec.
So far there has been one proposal for an op.sem that actually makes these writes to `let`-bound variables UB, and it's Stacked Borrows. This is also an aspect of Stacked Borrows that has caused a lot of raised eyebrows and confusion, as tracked in https://github.com/rust-lang/unsafe-code-guidelines/issues/257. I am convinced we want to fix https://github.com/rust-lang/unsafe-code-guidelines/issues/257, via something like what Tree Borrows does.
Absent that, the best model I can think of is to introduce a new statement in MIR that says "this variable is now initialized" and marks the corresponding memory allocation as read-only. But I don't see sufficient motivation to add such a statement, and I don't know how hard it would be to make MIR building emit such a statement.
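A small sketch of why placing such a statement needs initialization dataflow; `freeze(x)` below is purely hypothetical and not an existing MIR statement or API:

```rust
// `freeze(x)` marks where a hypothetical "now read-only" MIR statement
// would have to be emitted during MIR building.
#[allow(unused_variables, unused_assignments)]
fn sketch(b: bool) {
    let x: i32;
    if b {
        x = 13;
        // freeze(x) could go here, right after the initializing write...
    }
    // ...but not unconditionally here: on the `b == false` path `x` was never
    // initialized, so MIR building needs initialization dataflow to know
    // where (or whether) to mark the backing allocation read-only.
}
```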
Fair enough that including the opt by-spec is potentially a pain. Although, a simpler model is (in spec prose):

If the operand to a raw-address expr is an immutable place expression that denotes a binding, the resulting tag is `Frozen`.

I.e. `addr_of!(local)` would specifically generate a `Frozen` tag because `local` is an immutable place expression that denotes a binding.
Edit: Though this has one hole: Primitive slice/array indexes.
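The hole, as I read it, is that the operand can be an indexing place rather than the bare binding; a sketch:

```rust
use std::ptr::addr_of;

fn main() {
    let xs = [0u8; 4]; // non-`mut` binding
    let i = 2;
    // The operand is the indexing place `xs[i]`, not the plain binding `xs`,
    // so the rule above would have to decide whether this pointer is also
    // treated as read-only (`Frozen`).
    let p = addr_of!(xs[i]);
    println!("{:p}", p);
}
```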
`addr_of!` doesn't generate any tags though, it's a raw pointer operation. So this sounds like a rather non-compositional special case. Even Stacked Borrows, when it does the retags for "this was just cast from a ref to a raw ptr", doesn't know the place expression that computed this raw ptr -- it just gets the raw ptr and does its retag. (And that's leading to all sorts of trouble like https://github.com/rust-lang/unsafe-code-guidelines/issues/257 so I want to get rid of it.)
We're discussing the reasoning principles permitted to the compiler here, not the reasoning principles permitted to programmers.
If you canonize this:
let x = 0i32;
let ptr = addr_of!(x) as *mut i32;
unsafe { *ptr = 42; }
then programmers will be allowed to reason based on it. Indeed, they will be forced to.
There are a lot of things that are awkward for programmers to reason about that are nonetheless well-defined. Enum variants and `UnsafeCell` (in every model under serious consideration) are a great example.
Yes, knowing the type is required, and?
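One concrete instance of this type-dependent reasoning (a sketch, not tied to any particular aliasing model): whether a write through a shared reference is allowed depends entirely on whether the pointee type contains an `UnsafeCell`.

```rust
use std::cell::UnsafeCell;

// Well-defined: `UnsafeCell` opts the pointee out of the usual
// "no writes through shared references" guarantee.
pub fn write_through_cell(c: &UnsafeCell<i32>) {
    unsafe { *c.get() = 7 };
}

// The same-shaped write through a plain `&i32` (casting away constness)
// would be UB, because `i32` has no interior mutability:
//
//     fn write_through_plain(r: &i32) {
//         unsafe { *(r as *const i32 as *mut i32) = 7 }; // UB
//     }
```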
Part of my complaint is that if this optimizes differently:
let x = 5;
ffi_call(&x);
let y = x + 5;
than this:
let x = 5;
ffi_call(ptr::addr_of!(x));
let y = x + 5;
then now I can't tell people they should just prefer to use `ptr::addr_of!` anymore; I have to justify it based on whether the program needs that optimization.
I suggest making yourself familiar with MiniRust or a similar operational model. Just jumping into a deeply technical discussion can cause confusion when you're not familiar with the technical background.
Rude. I'm already familiar with MiniRust. But I suppose I must have been unclear.
No idea which point you're trying to make here. We need one rule for what happens at a write to a place, treating all places uniformly. We don't even know we're writing to a local variable when we do the `*ptr = 42;`, we're just writing to a memory location that's part of some allocation. We could say that the allocation that backs a `let` is read-only, but then even the initial write would be forbidden. Whether or not a pointer is ever taken has nothing to do with this; the writes `x = 0` and `*ptr = 0` look exactly the same to the memory model (assuming `*ptr` points to `x`).
Assuming:

- you can't take the address of a `let` binding before it is (fully) initialized, and
- you can't initialize a binding field by field,

then the choice of operational semantic can only possibly impact a programmer writing surface rust by deciding whether an initialized let binding is read-only or read-write.
I see a couple options for lowering let bindings:

- Introduce write-once allocations. Write after initialize is UB with exact tracking in miri and MiniRust. This also might be useful for optimizing OnceCell/Lazy type things.
- Emit some sort of special initialize/make-read-only operation directly after the initializing write. Write after initialize is UB with exact tracking in miri and MiniRust.
- Emit some sort of special initialize operation as the write. Write after initialize is UB with exact tracking in miri and MiniRust.

The chosen semantics would be useful for verifying the lowering of surface rust or for allowing an easier path to new language features like partial initialization.
but right now, as uninitialized let bindings are fully encapsulated by surface rust, the only impact the chosen operational semantics can have on programmers writing surface rust is the behavior of code like this:
let x = 0;
let ptr = addr_of!(x);
unsafe { *ptr = 42 };
tldr: i don't care what specific operational semantic we choose, but we can and should make writing to an initialized let binding UB that is detectable by our existing tooling.
@workingjubilee
Part of my complaint is that if this optimizes differently:
This will, in general, almost surely optimize differently. `addr_of!`/`addr_of_mut!` are markers that tell the compiler "rampant aliasing ahead, optimize carefully". That's one of their core features.
For the concrete examples, I think it is good that they optimize differently. References make subtle promises people should think about carefully; raw pointers should avoid making such promises.
then programmers will be allowed to reason based on it. Indeed, they will be forced to.
Sure, if they write complicated code they have to reason about it. That's on them. If we make the code UB they still have to reason about their code enough to realize it's UB. I don't see how this affects a library author. You seem to have some concrete example in your mind that involves you providing a macro and someone else using it in the wrong way, or maybe I entirely misunderstood because I can't read your mind.
@Calvin304
Rude. I'm already familiar with MiniRust. But I suppose I must have been unclear.
I don't think it is rude to politely point out when in a technical discussion, someone seems to lack the required technical background and risks derailing the discussion. The Zulip stream is a good place to ask questions.
Your comments were stating facts about surface Rust, without making a clear argument as part of the discussion at hand. I took that to indicate a lack of understanding of the deeper question we are discussing. Maybe I misjudged, in which case I apologize.
Assuming:
You're still harping on what I consider superficial syntactic coincidences of current Rust: that one can't do `addr_of!` of a variable that is declared but not yet initialized, and that one can't field-by-field initialize a tuple/struct. I think it would be a mistake to design an opsem that enshrines these limitations. Neither of them is fundamental, neither of them exists in MIR, I could imagine both of them being lifted in the future.
Introduce write-once allocations. Write after initialize is UB with exact tracking in miri and MiniRust. This also might be useful for optimizing OnceCell/Lazy type things.
I mentioned this possibility upthread and discarded it because it prevents future Rust features that we don't want to categorically rule out yet.
Emit some sort of special initialize/make-read-only operation directly after the initializing write. Write after initialize is UB with exact tracking in miri and MiniRust.
I mentioned this. It's the most plausible way to do this IMO, but requires MIR building to figure out when a variable is fully initialized which seems non-trivial.
Emit some sort of special initialize operation as the write. Write after initialize is UB with exact tracking in miri and MiniRust.
That precludes the same future options as your first item.
tldr: i don't care what specific operational semantic we choose, but we can and should make writing to an initialized let binding UB that is detectable by our existing tooling.
I understand you want that, but what's the argument for why? You're saying we should explode programs in people's faces (aka make them UB), and you've now provided some mechanisms for how to do so (thanks for that), but you haven't explained why you think it's better for these programs to have the worst kind of bug a program can have rather than being "discouraged but well-defined unsafe code crimes".
You're still harping on what I consider superficial syntactic coincidences of current Rust: that one can't do addr_of! of a variable that is declared but not yet initialized, and that one can't field-by-field initialize a tuple/struct. I think it would be a mistake to design an opsem that enshrines these limitations. Neither of them is fundamental, neither of them exists in MIR, I could imagine both of them being lifted in the future.
I am arguing that actually choosing a specific operational semantic for uninitialized let bindings doesn't have to happen until we have a way (or even just a proposal) to write surface rust that can 'care' about the specific operational semantic. Even without a specific operational semantic we can already guarantee for unsafe code authors that initialized let bindings are read-only.
I understand you want that, but what's the argument for why? You're saying we should explode programs in people's faces (aka make them UB), and you've now provided some mechanisms for how to do so (thanks for that), but you haven't explained why you think it's better for these programs to have the worst kind of bug a program can have rather than being "discouraged but well-defined unsafe code crimes".
Saying let bindings are read-only allows the implementation to choose what to do if writes occur; here are some options for the implementation:
One of the nice things about let bindings is that they are local. This means that for you to take a pointer to a local directly (which is required to avoid `&`'s own read-only guarantee) you have to do so in the same function with the name of the local directly. As such any unsafe code authors would have a very easy switch to `let mut` and `addr_of_mut` if they want read-write semantics.
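For illustration, the "easy switch" in question (a sketch; nothing here depends on which semantics is eventually chosen):

```rust
use std::ptr::{addr_of, addr_of_mut};

fn main() {
    // Read-only intent: non-`mut` binding, `*const` pointer.
    let x = 0i32;
    let _p: *const i32 = addr_of!(x);

    // Read-write intent: the switch is local to this function and purely syntactic.
    let mut y = 0i32;
    let q: *mut i32 = addr_of_mut!(y);
    unsafe { *q = 42 };
    assert_eq!(y, 42);
}
```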
Imagine you are an unsafe code author and you run the following excerpt in Miri:
```rust
use std::hint::unreachable_unchecked;
use std::ptr::addr_of;

fn main() {
    let x = 0i32;
    unsafe { call_library_function(addr_of!(x)); }
    if x != 0 {
        unsafe { unreachable_unchecked() }
    }
}

unsafe fn call_library_function(p: *const i32) {
    // potentially arbitrarily far down the call stack
    *(p as *mut i32) = 1;
}
```
with let binding read-write semantics you would see Miri error at the `unreachable_unchecked` and then have to spend potentially some time trying to find the unexpected write. But with let binding read-only semantics you would see Miri error at the write `*(p as *mut i32) = 1;`. At this point the unsafe code author would either add `let mut` and `addr_of_mut` if they want read-write semantics, or they would remove the pointer write.
Emit some sort of special initialize/make-read-only operation directly after the initializing write. Write after initialize is UB with exact tracking in miri and MiniRust.
I mentioned this. It's the most plausible way to do this IMO, but requires MIR building to figure out when a variable is fully initialized which seems non-trivial.
Currently this tracking is already done in order for uninitialized let bindings to be safe in surface rust. And as far as I can tell this tracking is done lexically, not in MIR. This tracking is mentioned in the rust reference.
To echo Calvin and Jubilee, I do not think it is worth the confusion to allow mutating initialized `let` bindings (via pointer or any other way aside from interior mutability). If one wants a local variable to be mutable, they can just declare it using `let mut` (and take its address using `addr_of_mut!`; I have less of an opinion on whether `addr_of!` should deny writes in general in cases where `addr_of_mut!` would not).
(Slightly offtopic from the original question, but relevant to later discussion) I do not think that it is a "syntactic coincidence" that one cannot get a pointer to a partially initialized local in Rust; in safe code, Rust guarantees that `let` bindings are written to at most once (and only lets you use them after they have been initialized), and I see no reason that unsafe code should deviate from that. If one needs to both have a partially initialized value and take a pointer to it, IMO `let mut` and `MaybeUninit` should be used, to explicitly tell the compiler "I am doing the initialization tracking for this". (I might compare this to how unsafe code still must uphold (most of) the invariants of `&` and `&mut`, and should instead use raw pointers to tell the compiler "I am doing the aliasing tracking for this".)
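A sketch of the suggested `let mut` + `MaybeUninit` pattern, where the programmer explicitly takes over the initialization tracking:

```rust
use std::mem::MaybeUninit;
use std::ptr::addr_of_mut;

pub fn make_pair() -> (i32, i32) {
    // Explicitly uninitialized storage we are allowed to write into,
    // field by field, through raw pointers.
    let mut pair = MaybeUninit::<(i32, i32)>::uninit();
    let p = pair.as_mut_ptr();
    unsafe {
        addr_of_mut!((*p).0).write(0);
        addr_of_mut!((*p).1).write(1);
        // We did the initialization tracking ourselves, so this is sound.
        pair.assume_init()
    }
}
```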
Confusion of users isn't a justification for making things UB (on the contrary, it's typically a justification for making things DB). The justification is optimization or ease of implementation. A user being confused about a snippet of code doesn't necessitate giving every Rust implementation unlimited license to do whatever it wants with that snippet. We don't need to teach that it's a good idea, or even a correct operation, in order to say "yes, it's DB; compiler vendors, no, you don't get to delete the code".

And certainly, if there's no good way to specify that given code has undefined behaviour, the spec (neither MiniRust nor the prose version in rust-lang/spec) shouldn't be made more complicated just because it's "hard to teach" that some code is considered valid by the abstract machine (even if we as programmers might not consider it "valid").
I do not think that it is a "syntactic coincidence" that one cannot get a pointer to a partially initialized local in Rust
I'm actually unsure this is true even now - you can get a pointer to the initialized field. Under TB provenance rules, I'd expect you can access the whole allocation with that pointer. Though, in that case, I'd expect the resulting pointer to behave as if we had taken a pointer to the whole local (so if the rules are that the entire object is immutable via pointer, then I'd expect the same here even for a partially-initialized object).
`addr_of!` doesn't generate any tags though, it's a raw pointer operation.

It has to get the base pointer from somewhere (unless MiniRust locals really are just SSA vars carrying an `alloca` pointer like lccc-MIR after the first address is taken).
It has to get the base pointer from somewhere (unless MiniRust locals really are just SSA vars carrying an `alloca` pointer like lccc-MIR after the first address is taken).
Quoting MiniRust:
/// For each live local, the location in memory where its value is stored.
locals: Map<LocalName, Pointer<M::Provenance>>,
So yes, that's basically what they are -- except this isn't SSA, and there's no "after first address taken". On StorageLive, we reserve memory (and generate a tag) and store the pointer in `locals`; on StorageDead, we free the memory and remove the entry in `locals`. Very simple, and nicely modular -- each operation just does one thing, and there's just a single possible representation for a live local (not an "optimized" form and an "address was taken" form, as in Miri and apparently in your language).
Even without a specific operational semantic we can already guarantee for unsafe code authors that initialized let bindings are read-only.
This is backwards. We are not guaranteeing anything for unsafe code authors. We are asking unsafe code authors to prove something to us, on penalty of nasal demons if they get it wrong!
Imagine you are an unsafe code author and you run the following excerpt in Miri
Imagine you are an unsafe code author that cannot run their code in Miri. You just added Heisenbugs to their code!
Currently this tracking is already done in order for uninitialized let bindings to be safe in surface rust. And as far as I can tell this tracking is done lexically, not in MIR.
I was specifically talking about having to track this in MIR now, so this is beside the point.
I am arguing that actually choosing a specific operational semantic for uninitialized let bindings doesn't have to happen until we have a way (or even just a proposal) to write surface rust that can 'care' about the specific operational semantic.
It is a bad idea, for extensions we can already see coming (like initializing a tuple field by field), to just close our eyes and pretend they will never happen. We're designing an opsem for the future, not the past. It is bad engineering to ignore likely future usage scenarios.
To echo Calvin and Jubille, I do not think it is worth the confusion to allow mutating initialized let bindings (via pointer or any other way aside from interior mutability).
But you think it is worth the confusion of Heisenbugs and miscompilations to achieve this?
To be abundantly clear, there is no program that becomes easier to reason about with more UB! You are just adding more proof obligations to the pile of things people have to worry about before basic things like debugging work reliably.
If you are passing a raw const ptr to someone else's code, and that code has a bug and does a write when it should not, this is still easier to debug (outside Miri) when there is less UB, not more!
in safe code Rust guarantees that let bindings are written to at most once
You're asking to add nontrivial complications to the spec that underpins Rust (write-once memory would have to be added to the memory model, already a quite complicated part of Rust), which will cumulatively cost many people countless hours to read and understand. And what you get out of that is largely more head scratching when people have their code miscompiled. I call that a lose-lose situation.
I can sometimes see the appeal of more UB for more "structure", but certainly not when it requires adding entirely new ghost state to the language, like per byte tracking of mutability (to support initializing one field and then another)!
I really don't understand why people want to go out of their way and make the spec quite a bit more complicated just to inflict the pain of more UB on our fellow Rustaceans. This should have an extremely high bar to pass.
If the goal is to make Miri flag more things, then we should be talking about "erroneous behavior" (that's what C++ calls it), not Undefined or Defined behavior. Erroneous behavior means: when this happens it's a bug, and then either execution stops or it continues just fine, but no extra assumptions or optimizations can be made by the compiler. If the program doesn't abort, it continues in an entirely well-defined way.
You are making some valid arguments for why writing to non-interior-mutable `let` should be erroneous behavior. I am still not sure I agree with carrying all that extra state to implement write-once semantics, but that would explain why you are focusing so much on Miri checking this and so little on unsafe code authors having to prove it.
You seem to have some concrete example in your mind that involves you providing a macro and someone else using it in the wrong way, or maybe I entirely misunderstood because I can't read your mind.
I am treating the author of the macro as potentially adversarial to the author of the code.
This can remain true even if they are the same person.
I find that it is often the case that I must adopt this quasi-adversarial stance in order to understand how to rectify questionably-written unsafe
code until it is sound again. So I make my request knowing what can make it harder and easier for me to audit and rectify a gnarled and twisted codebase full of mazy indirections, obscuring code with poorly-documented preconditions, postconditions, and invariants. And macros offer a perfect ad-hoc solution for authors that want to provide an interface but don't want to reason quite so much about types, because they can "simply" parse the input and do codegen.
I can, without a doubt, guarantee you that there is more than one way to introduce a Heisenbug.
@workingjubilee I can just repeat asking you for a fully concrete example. I asked before, but you're still just talking in generalities that are impossible to confirm or refute. I can't look into your head to see what you think would be harder or easier to audit. Until I am given a counterexample, I maintain my claim that, all else being equal, adding more UB can never make anything easier to audit. (I think that's a theorem: since Q → P, proving Q can never be easier than proving P. P is "program well-defined with less UB" and Q is "program well-defined with more UB". Things get less monotonic when considering libraries, not whole programs, but then we're also considering safety invariants, not validity invariants.)
I provided one, did I not?
let x = 5;
let stuff = construct!(x);
stuff.do_something();
assert_ne!(x, 5);
println!("if this code is reached, modular reasoning got axed.");
The internal implementation of `construct!` and `do_something` are unimportant because they are arbitrarily non-trivial: as many thousands of lines long as necessary to make you flinch at the thought of having to sift an invocation of `ptr::addr_of!` out of them. Because they then do FFI with a million-line codebase.
Oh, and yes, they're libraries that produce libraries that do IPC.
That get dlopened. Sorry, almost forgot.
That get dlopened. Sorry, almost forgot.
You can stop the needless hyperbole.
I am dead serious.
Me too. Except I guess I won't write my reply now as this isn't useful.
How is it hyperbole if it's real?
There's no technical discussion in which trying to construct the most complicated hypothetical scenario is constructive.
@RalfJung "proc macros with mazy implementations that then expand into mazy code that are used as part of a library that is used to implement libraries that that get dlopened and do IPC over shmem" is an actual, no-shit, stone-faced description of what I work on.
I guess I can't help myself so I'll write a reply before likely unsubscribing to protect my sanity. Someone please ping me when things cooled down.
It's not entirely clear to me who's reasoning about what in your example. That's what I was asking you to explain: making it an actual self-contained example that people not in your brain can understand. I was not asking you to explain how to conjure Cthulhu, even if conjuring Cthulhu is your dayjob.
I guess it's about `construct!` or `do_something` having a bug and mutating things when they shouldn't? You know, you could have just said that: where's a potential bug, who's doing what to find it. (You still haven't said the latter part.)
Let's see how this fares.

**Without `let` mutation being UB:** The program has well-defined semantics so you can actually use a debugger to find this. Set a memory watcher on the variable to see where it gets mutated, voila. You still have to sift through all the abstractions but you have a starting point.

**With `let` mutation being UB:** Your `assert_ne` can still succeed! It can't succeed in any UB-free execution, but that means nothing here. You don't have a magic oracle that tells you whether there is UB. (You can't use Miri because, as you said yourself, FFI is involved.) Nothing gets any easier in this world. The code is still buggy, but now you additionally have to worry about the compiler miscompiling things, so sometimes code may seem to work when it should not. You now have 2 problems: a buggy codebase, and a compiler that works against you.

It is orders of magnitude simpler to debug a "normal" bug than a UB bug. So for the sake of those that try to conjure Cthulhu (and everyone else), we should have less UB, not more.
These are points I already made above.
Attempting to get this conversation off of the above discussion, I'm going to remake my points actually in favour of having UB here, more concisely and specifically:

- lccc lowers locals to `alloca <mutability>` (where the mutability corresponds to the HIR Var declaration's mutability), and I'd like to lower `alloca const` to an immutable local variable in xlang.
- There should logically be a way to make "immutable after initialization" work in a reasonable way - I'd expect it would be simpler to modify `addr_of`'s MiniRust lowering rather than the variable declaration itself.
This came up in https://github.com/rust-lang/rust/issues/111502 and overlaps with https://github.com/rust-lang/unsafe-code-guidelines/issues/257: should it be allowed to mutate `let`-bound variables, via code like this?

Tree Borrows currently accepts this code since `addr_of` never generates a new tag, it just creates an alias to the original pointer. Stacked Borrows rejects it due to #257. There are other ways to reject this code while making `*const` and `*mut` equivalent, hence I opened this as a separate issue.

Personally I favor the Tree Borrows behavior here and don't think it is worth the extra effort to make these variables read-only.
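For reference, the kind of code the issue asks about, the same pattern quoted in the replies above:

```rust
use std::ptr::addr_of;

fn main() {
    let x = 0i32;
    let ptr = addr_of!(x) as *mut i32;
    // The central question: is this write UB because `x` is a non-`mut`
    // binding, or well-defined because the raw pointer is an exact alias of `x`?
    unsafe { *ptr = 42 };
    println!("{x}");
}
```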