rust-lang / rust

Constants can contain references that are not Sync #49206

Open jDomantas opened 6 years ago

jDomantas commented 6 years ago

This compiles (playground):

#![feature(negative_impls)]

#[derive(Debug)]
struct Foo {
    value: u32,
}

impl !std::marker::Sync for Foo {}

fn inspect() {
    let foo: &'static Foo = &Foo { value: 1 };
    println!(
        "I am in thread {:?}, address: {:p}",
        std::thread::current().id(),
        foo as *const Foo,
    );
}

fn main() {
    let handle = std::thread::spawn(inspect);
    inspect();
    handle.join().unwrap();
}

And prints this (the address varies between runs, but is always the same for both threads):

I am in thread ThreadId(0), address: 0x5590409f7adc
I am in thread ThreadId(1), address: 0x5590409f7adc

This shows that two threads get the same 'static reference to a non-Sync struct.

The problem is that promotion to static does not check if the type is Sync, while doing it manually does (this does not compile):

static PROMOTED: Foo = Foo { value: 1 };
let foo = &PROMOTED;
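
For reference, a minimal self-contained sketch of the manual version (reusing the Foo from above); the static is rejected with an error along the lines of "`Foo` cannot be shared between threads safely":

#![feature(negative_impls)]

#[derive(Debug)]
struct Foo {
    value: u32,
}

impl !std::marker::Sync for Foo {}

fn main() {
    // error[E0277]: `Foo` cannot be shared between threads safely
    static PROMOTED: Foo = Foo { value: 1 };
    let foo: &'static Foo = &PROMOTED;
    println!("{:?}", foo);
}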

Reading the RFC, it seems that the relevant condition is only about containing UnsafeCell, and does not account for other !Sync types, which are also not supposed to be shared across threads.

hanna-kruppe commented 6 years ago

cc @eddyb

eddyb commented 6 years ago

If you can make the constant, and it's immutable, I'm not sure what you can do with it that's unsound - the only thing affected would be unsafe code relying on such a constant not being possible, for one reason or another.

Rvalue promotion is like const with a reference in it, not like static. Actually, static itself could be relaxed too when interior mutability is not involved.

I'm tempted to close as "working as intended", unless there is more code to look at, which gets broken in one way or another by this capability.

eddyb commented 6 years ago

cc @nikomatsakis

hanna-kruppe commented 6 years ago

It seems like a pretty clear violation of the Sync contract. While unsoundness seems unlikely in practice (you'd need a setup where the non-Sync values are used as "tokens" for something thread-local or thread-unsafe), at least in theory unsafe code should really be allowed to rely on non-Sync data being truly not shared across threads.

Edit: Hmmm come to think of it, this is complicated by the fact that Sync is an "at any given time thing", since Mutex and the like can make things Sync. Still seems bad to muddy the waters here.

cc @RalfJung

nikomatsakis commented 6 years ago

For better or worse (worse, I guess), this also builds:

#![feature(optin_builtin_traits)]

#[derive(Debug)]
struct Foo {
    value: u32,
}

impl !std::marker::Sync for Foo {}

const F: &'static Foo = &Foo { value: 1 };

fn main() {
}

RalfJung commented 6 years ago

Yeah I think this is a problem.

I imagine some (rather silly) pseudo-code like this, based on @rkruppe's idea of using Foo as a "token":

#![feature(negative_impls)]

use std::sync::Mutex;

#[derive(Clone)]
struct Event {
  address: usize,
  kind: EventKind,
}

#[derive(Clone)]
enum EventKind { Acquire, Release }

// A global event registry thing
static EVENT_LIST: Mutex<Vec<Event>> = Mutex::new(Vec::new());

fn add_event(e: Event) {
  EVENT_LIST.lock().unwrap().push(e)
}

fn get_events() -> Vec<Event> {
  EVENT_LIST.lock().unwrap().clone()
}

// Now comes the bad code
struct Foo(u32);

impl !Sync for Foo {}

impl Foo {
  fn acquire_release(&self) {
    add_event(Event { address: self as *const _ as usize, kind: EventKind::Acquire });
    add_event(Event { address: self as *const _ as usize, kind: EventKind::Release });
  }
}

const F: &'static Foo = &Foo(0);

fn main() {
  let handle = std::thread::spawn(|| F.acquire_release());
  F.acquire_release();
  handle.join().unwrap();
}

Playground version

Given that Foo is non-Sync, which guarantees that all pointers to the same &Foo live in the same thread, I think unsafe code should be allowed to rely (for UB) on the fact that the event list will never, ever contain something like (a, Acquire) (a, Acquire) (a, Release) (a, Release) for any address a. However, the code above could actually have exactly that outcome, with a being the value of F.
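
A sketch (hypothetical helper, assuming the Event/EventKind types above) of the kind of check such unsafe code might want to rely on:

use std::collections::HashSet;

fn assert_no_double_acquire(events: &[Event]) {
    let mut acquired = HashSet::new();
    for e in events {
        match e.kind {
            // With the reasoning above, unsafe code could even replace these
            // assertions with `std::hint::unreachable_unchecked()`.
            EventKind::Acquire => assert!(acquired.insert(e.address), "double acquire at {:#x}", e.address),
            EventKind::Release => assert!(acquired.remove(&e.address), "release without acquire at {:#x}", e.address),
        }
    }
}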

EDIT: See here for further elaboration.

eddyb commented 6 years ago

Given that Foo is non-Sync, which guarantees that all pointers to the same &Foo live in the same thread

Do we guarantee this anywhere? My expectation is that the fields of such an abstraction must be private, and that means you can only create the problematic global references in the same module. And you might want those references, for a small set of special values that are valid cross-thread.

As another example, there are non-Send types in the compiler with special const values that happen to be usable everywhere, regardless of thread-local state (e.g. Span's DUMMY_SP).

RalfJung commented 6 years ago

Do we guarantee this anywhere? My expectation is that the fields of such an abstraction must be private, and that means you can only create the problematic global references in the same module.

I think my Foo above is a counterexample. Its only field does not matter at all; it can be public. It could also be removed entirely, making Foo a unit struct.

As another example, there are non-Send types in the compiler with special const values that happen to be usable everywhere, regardless of thread-local state (e.g. Span's DUMMY_SP).

That's a different thing. Even types that are not in general Send can have elements that are Send -- just like Vec has an element that is Copy.

RalfJung commented 6 years ago

So the conclusion from the unsafe code guidelines meeting yesterday (as far as I understood) is that we want to disallow this. Rvalue promotion should be thought of as implicitly generating additional statics, but with looser guarantees about address uniqueness. All restrictions on statics apply, including the type having to be Sync. The open question is whether we can make this breaking change; it is not clear how much code relies on the current behavior. There will be a crater run.

arielb1 commented 6 years ago

With my current understanding of the semantics, there's nothing wrong with several different zero-sized values having the same address.

#![feature(optin_builtin_traits)]

struct NotSync;
impl !Sync for NotSync {}

fn main() {
    let mut u = NotSync;
    let mut v = NotSync;
    println!("{:?}", &mut u as *mut NotSync);
    println!("{:?}", &mut v as *mut NotSync);
}

hanna-kruppe commented 6 years ago

I think that's true, but this issue applies to non-ZSTs as well.

arielb1 commented 6 years ago

@rkruppe

Sure. Missed that.

RalfJung commented 6 years ago

Right, this is not at all about addresses. Even ZSTs may not be duplicated as that may violate uniqueness guarantees, also see https://github.com/nikomatsakis/rust-memory-model/issues/44. I see sending ZSTs across thread boundaries as similar.

RalfJung commented 6 years ago

Yeah so this is still open... how hard would it be to fix? @oli-obk @eddyb

eddyb commented 6 years ago

(unrelatedly, I made a comment somewhere else: https://github.com/rust-lang/rust/pull/53851#issuecomment-422915846)

@RalfJung You would probably copy is_freeze and is_freeze_raw, to make is_sync, then use that the same way is_freeze is used, during promotion (both in rustc_passes and rustc_mir).

Note that it's slightly harder than the check for statics because we report that as an error, while for promotion, we need to make decisions (whether to promote or not) off of T: Sync, without erroring.

RalfJung commented 6 years ago

@eddyb Trying that now, let's see if that works.

RalfJung commented 6 years ago

@eddyb not sure what is going on, but it doesn't work... see https://github.com/RalfJung/rust/commit/623fa51d6f6a60d90b651b5f23489d7c58ea27e4. The newly added test fails to fail to compile (as in, it compiles). How can that be?

eddyb commented 6 years ago

@RalfJung The codepaths you changed are for unknown values of those types. You also need to handle ADT constructors, which is_freeze / needs_drop don't touch right now directly.

RalfJung commented 6 years ago

Actually promotion is far from the only problem as @eddyb pointed out...

struct Bar(i32);
impl !Sync for Bar {}

const C: &'static Bar = &Bar(40);

fn main() {}

We can have consts with non-Sync references that are shared with all threads. Ouch.
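
A sketch of what that allows (reusing Bar and C from above; the negative impl needs #![feature(negative_impls)]):

#![feature(negative_impls)]

struct Bar(i32);
impl !Sync for Bar {}

const C: &'static Bar = &Bar(40);

fn main() {
    // Both threads print the same address: the same &'static Bar is shared
    // across threads even though Bar: !Sync.
    let handle = std::thread::spawn(|| println!("spawned: {:p}", C));
    println!("main:    {:p}", C);
    handle.join().unwrap();
}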

eddyb commented 6 years ago

Yeah, I'm surprised we didn't bring that up, since it's how the RFC defines rvalue promotion.

I believe the only reason static ever required Sync was interior mutability - we likely would've made it Sync | Freeze had we thought about it more.

That is, we were worried about synchronizing mutation, not address equality (which we treated as generally unguaranteed).

RalfJung commented 6 years ago

I think Sync is exactly the right choice for static and I am happy that's what we got. Logically speaking, a static is some piece of state that every thread can get a shared ref to any time. Sync is exactly the property saying "sharing those shared refs across threads is fine". Freeze does not imply Sync. Not once you accept that people can have arbitrary invariants on their types that we do not know anything about.

The reason promotion also cares about Freeze is the observable effect of merging different allocations into one. That is an entirely different concern from synchronization across threads. It is a purely operational concern, and Freeze is a very operational statement (that the underlying representation of the type cannot change behind a shared ref).

eddyb commented 6 years ago

It has nothing to do with merging, and everything to do with not changing runtime semantics. You can only be sure the data isn't written to if you have &T where T: Freeze.

RalfJung commented 6 years ago

I think you are saying the same thing as me. Runtime semantics get changed when two allocations that start having the same content get merged (deduplicated) and one gets written to.

RalfJung commented 6 years ago

I have a proposed fix at https://github.com/rust-lang/rust/pull/54424

eddyb commented 6 years ago

@RalfJung I would not call it "merging" when a runtime function sees the same global allocation twice instead of fresh stack memory. Content-based deduplication is an implementation detail irrelevant here.

RalfJung commented 6 years ago

Content-based deduplication is an implementation detail irrelevant here

That statement is true only because we are ensuring Freeze.

eddyb commented 6 years ago

@RalfJung This is what I mean, and why we based this analysis on Freeze:

use std::sync::atomic::{AtomicUsize, Ordering};

fn foo() -> usize {
    let counter = &AtomicUsize::new(0);
    counter.fetch_add(1, Ordering::SeqCst)
}

There's only one global allocation and deduplication is irrelevant. Also, it's Sync. However, we still can't promote it, because that would change foo's runtime behavior.
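
A sketch of the behavior change (hypothetical names) if that borrow were promoted to a single shared allocation:

use std::sync::atomic::{AtomicUsize, Ordering};

// Today: a fresh AtomicUsize per call, so this always returns 0.
fn foo_unpromoted() -> usize {
    let counter = &AtomicUsize::new(0);
    counter.fetch_add(1, Ordering::SeqCst)
}

// If the value were promoted, all calls would share one allocation,
// and the function would return 0, 1, 2, ... across calls.
static SHARED_COUNTER: AtomicUsize = AtomicUsize::new(0);
fn foo_promoted() -> usize {
    SHARED_COUNTER.fetch_add(1, Ordering::SeqCst)
}

fn main() {
    assert_eq!(foo_unpromoted(), 0);
    assert_eq!(foo_unpromoted(), 0);
    assert_eq!(foo_promoted(), 0);
    assert_eq!(foo_promoted(), 1);
}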

RalfJung commented 6 years ago

@eddyb I am confused. Promoting it would change foo's runtime behavior because we would be deduplicating the allocations. Every call to foo creates a new AtomicUsize, and merging all their backing stores is observable because they are not Freeze.

Why are you saying "there's only one global allocation"? There would be only one global allocation if we would promote, but without promotion there is one allocation per call. Is there not?

I agree we need Freeze, because of this deduplication. We also need Sync because we are sharing an instance across threads.

eddyb commented 6 years ago

Ahh, you're not using "deduplication" in the sense of content-based compile-time optimization, but rather at a more abstract semantic level.

Because the promotion spec doesn't have to guarantee whether two &123 (written in different places) get the same address (and even the same promoted could get multiple addresses, based on where it's instantiated).

RalfJung commented 6 years ago

Ahh, you're not using "deduplication" in the sense of content-based compile-time optimization, but rather at a more abstract semantic level.

I should have been more clear. I am talking about the effect of promotion itself, which changes "a new allocation per invocation" to "always re-use the same allocation", which is a form of deduplication -- and for Freeze data, this is only operationally observable by comparing addresses.
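
A sketch of that observation (hypothetical functions): for Freeze data, the only way to tell whether promotion happened is to compare addresses.

fn promoted() -> &'static i32 {
    &1 // the 1 is promoted; in practice every call returns the same address
}

fn unpromoted() -> usize {
    let x = 1;
    &x as *const i32 as usize // a fresh stack slot per call
}

fn main() {
    // In practice this prints `true`, although (as noted above) the promotion
    // rules do not actually guarantee address identity.
    println!("{}", std::ptr::eq(promoted(), promoted()));
    println!("{:#x} {:#x}", unpromoted(), unpromoted());
}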

That is however an orthogonal concern to the fact that the same "thing" is shared across threads, which should only be done for Sync data.

RalfJung commented 4 years ago

Another example of non-Send consts (i.e., consts with a reference to something non-Sync) in the wild: https://github.com/rust-lang/rust/issues/67601

SimonSapin commented 4 years ago

For what it’s worth I don’t think there is a thread-safety issue in #67601. Everything in those constants is immutable. !Sync raw pointers are only used because they are expected by FFI.

RalfJung commented 4 years ago

Yeah, raw pointers really shouldn't be a problem here. I mean, they don't have meaningful invariants anyway, so making them Sync would not on its own break anything.

Unfortunately, it's hard to tell that apart from types that are !Sync for better reasons.

SimonSapin commented 4 years ago

!Sync is problematic when the same "instance" (same memory address) is reused, which only happens when a constant is promoted to static memory, right? Would it be possible to disable promotion of !Sync values? I would expect unpromoted const items to emit similar code to argumentless macros, is that accurate?

SimonSapin commented 4 years ago

Oh. I’ve read this issue’s description more closely and realized this is not about const items but about references, and that unpromoting would reduce their lifetime from 'static to the stack frame, which might break some programs. Never mind my previous comment.

oli-obk commented 4 years ago

Yeah, raw pointers really shouldn't be a problem here. I mean, they don't have meaningful invariants anyway, so making them Sync would not on its own break anything.

I mean, the reason they are !Sync is that it's easier/safer to default structs containing raw pointers to !Sync than to have users accidentally build types that are Sync but should not be.

!Sync raw pointers are only used because they are expected by FFI.

Couldn't you still use references as they are #[repr(C)] and have the same representation as the raw pointers? Or Option<&T> where null is possible.

which only happens when a constant is promoted to static memory, right?

Yes, but you have this everywhere in your code. const FOO: &i32 = &1; already promotes the 1, and that is necessary, because otherwise your &i32 would be dangling. So if you do const FOO: & *const i32 = &(&1 as *const _); we need to promote the &1 as *const _, because otherwise you get a lifetime error.

I would expect unpromoted const items to emit similar code to argumentless macros, is that accurate?

No. A constant produces a set of bytes plus relocations (basically additional information saying "this range of the bytes is a pointer"), and all that happens when you use the constant is that its bytes are copied into your local variable. All the things that the constant points to must already have an anonymous static created for them so we can point at them.

A constant isn't built at runtime.

SimonSapin commented 4 years ago

Couldn't you still use references as they are #[repr(C)] and have the same representation as the raw pointers? Or Option<&T> where null is possible.

Not easily, there are a number of structs involved that are generated by rust-bindgen. I think unsafe impl Sync for JSJitInfo would be valid, though.
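
A sketch of that kind of manual assertion (a hypothetical bindgen-style struct, not the real JSJitInfo):

use std::os::raw::c_char;

#[repr(C)]
pub struct JitInfoLike {
    pub name: *const c_char, // raw pointer field makes the struct !Sync by default
    pub flags: u32,
}

// SAFETY (to be argued by the author): every pointer stored in these statics
// points to immutable 'static data, so sharing across threads is fine.
unsafe impl Sync for JitInfoLike {}

pub static INFO: JitInfoLike = JitInfoLike {
    name: b"example\0".as_ptr() as *const c_char,
    flags: 0,
};

fn main() {
    println!("{:p} {}", INFO.name, INFO.flags);
}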

oli-obk commented 4 years ago

Not easily, there are a number of structs involved that are generated by rust-bindgen.

:see_no_evil: oh... and it's impossible for bindgen to tell the difference between pointers that own their pointee and pointers that just reference them. :(

I think unsafe impl Sync for JSJitInfo would be valid, though.

Oh yea, I was just hoping to get rid of the raw pointers entirely.

Dylan-DPC-zz commented 4 years ago

Marked p-medium as per our prioritisation discussion

taiki-e commented 2 years ago

This is currently marked requires-nightly, but can be reproduced without nightly: playground

#[derive(Debug)]
struct Foo {
    value: u32,
}

// stable negative impl trick from https://crates.io/crates/negative-impl
// see https://github.com/taiki-e/pin-project/issues/102#issuecomment-540472282 for details.
struct Wrapper<'a, T>(::std::marker::PhantomData<&'a ()>, T);
unsafe impl<T> Sync for Wrapper<'_, T> where T: Sync {}
unsafe impl<'a> std::marker::Sync for Foo where Wrapper<'a, *const ()>: Sync {}
fn _assert_sync<T: Sync>() {}

fn inspect() {
    let foo: &'static Foo = &Foo { value: 1 };
    println!(
        "I am in thread {:?}, address: {:p}",
        std::thread::current().id(),
        foo as *const Foo,
    );
}

fn main() {
    // _assert_sync::<Foo>(); // uncommenting this causes a compile error: "`*const ()` cannot be shared between threads safely"

    let handle = std::thread::spawn(inspect);
    inspect();
    handle.join().unwrap();
}

@rustbot label: -requires-nightly

bstrie commented 1 year ago

Now that it's been revealed that this is possible (if roundabout) on stable, should the priority on this be increased? What would it take to at least begin emitting warnings here, to prevent new code from taking advantage of this and making it even more painful to fix in the future?

tgross35 commented 1 year ago

Is this something that could be tied to editions? If so, a timeline like this might be reasonable:

  1. warn-by-default asap
  2. deny-by-default 2023Q3/Q4
  3. forbid with 2024 edition

The latest crater run from Ralf's PR #54424 indicated 18 regressions, but that's four years out of date at this point, so the number of occurrences could have gone either way.

RalfJung commented 1 year ago

It's not even that roundabout, see https://github.com/rust-lang/rust/issues/67601

pub const FOO: &'static *const u32 = &(&BAR as _);
pub const BAR: u32 = 0;

This is part of the broad set of questions around observing pointer identity of consts. At least so far the only way I know of to observe this issue is to compare addresses. Basically we should have a lot of instances of this !Sync (but Freeze) type, one for each use, but they all get deduplicated. Maybe we just have to find a way to rationalize this... I don't have a lot of confidence that we can land a fix here.

Making it a hard error only on some editions does not really help: when I write a !Sync type I still have to consider users on old editions, and have to ensure my type is sound for them.

Is there a known example in the wild of a crate that makes pointer-identity arguments for !Sync types, i.e., a type that is actually broken by this issue?

RalfJung commented 10 months ago

Another approach to solving this problem would be trying to rationalize it -- what are the proof obligations for unsafe code authors to ensure that this rustc behavior does not cause unsoundness? If we change my example here to cause actual UB from these shared !Sync values, where does the soundness argument go wrong?

I feel like there are two ingredients to this:

  1. !Sync is not defined as "cannot ever have shared references to the same object in different threads". It is not defined at all; Sync is defined as "the shared safety invariant is thread-independent", and !Sync just means this property doesn't hold. On its own it grants no extra powers.
  2. It's the adjusted (thread-dependent) shared safety invariant that grants powers! But those powers come with responsibilities, and negative reasoning doesn't work: you still have to constructively justify everything you do.

If you do that, I don't think you can construct an unsoundness from my example.

I think I am less worried about this issue now and I am not even sure any more if it is a soundness issue. However, I am worried how it might interact with CTFE heap allocations.

LegionMammal978 commented 10 months ago

The way I like to think about it is that "all pointers to the same &T where T: !Sync live in the same thread" has never been a universal invariant. Otherwise

pub struct Wrapper(pub *const i32);
unsafe impl Sync for Wrapper {}

would be unsound, since Wrapper could be used to get the same &*const i32 in different threads, even though *const i32: !Sync.
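
For instance, a sketch of that sharing (hypothetical names):

use std::thread;

pub struct Wrapper(pub *const i32);
unsafe impl Sync for Wrapper {}

static X: i32 = 0;
static W: Wrapper = Wrapper(&X as *const i32);

fn main() {
    thread::scope(|s| {
        // Each thread forms its own `&*const i32` pointing at the same field of W,
        // even though `*const i32: !Sync`.
        s.spawn(|| {
            let r: &*const i32 = &W.0;
            println!("spawned: {:p}", r);
        });
        let r: &*const i32 = &W.0;
        println!("main:    {:p}", r);
    });
}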


Instead, the actual invariant is, "once a &T is created on one thread, no one can safely move it to another thread using language features". If an interface controls a particular !Sync object that should only be accessible from one thread, then it's additionally responsible for making sure that:

  1. The interface exposes no operation allowing users to safely access the object directly, except from the correct thread. For instance, it doesn't publicly expose a SyncWrapper(pub Object).
  2. The interface exposes no (internally unsafe) operation allowing users to safely copy an &Object to any other thread. For instance, it doesn't publicly expose a SendWrapper(pub &Object).
  3. Nothing inside the interface accesses the object directly on another thread.
  4. Nothing inside the interface unsafely copies an &Object to another thread.

Conversely, if any of these responsibilities are not upheld, then there's no automatic UB or unsoundness: it just means that the object can no longer be treated as only being accessible from one thread. For instance, if you receive a &*const i32 from somewhere, then clearly it might be shared between multiple threads, in the absence of any other contextual protection.


The big question is, if a span of code is able to safely construct an object using language features, then should it be able to safely directly access it from multiple threads? If so, then an interface that wants to uphold responsibility 1 cannot allow its objects to be publicly constructed. If not, then such an interface can allow its objects to be publicly constructed. The importance of this issue is that this is the only language 'feature' that would allow safe direct access to the same object from multiple threads, without going through references. (The confusion here is that this is more about responsibility 1 than responsibility 2. We aren't really copying around references here, so much as creating multiple references directly, given access to the same underlying object from multiple threads.)

In @RalfJung's example, I don't think it's necessarily true that "that can only be a sound API if Foo's field is private". Suppose that Foo's field is public, and this issue is fixed. Then, even if outside users create as many Foo objects as they want, they will all have different addresses, so acquire_release() will always form valid pairs in the event list, relative to a single address. It's only if this is not considered an issue, and if it's considered safe to directly access a constructed object from multiple threads, that Foo's field must be private for the API to be sound. So the justification given seems to be begging the question a bit, in my view.

LegionMammal978 commented 10 months ago

Also, I'd like to note that there's a big class of Freeze + !Sync types that don't involve external lists, and those are "remote Cells" that control access to an object at another address. To give a trivial example, one could write a Freeze CellWrapper type, such that a CellWrapper<'a, T> can be safely constructed from an &'a mut T, implementing the same operations as an &'a Cell<T>:

#![feature(negative_impls)]
use std::{mem, ptr::addr_of};

pub struct CellWrapper<'a, T>(pub &'a mut T);
impl<T> !Sync for CellWrapper<'_, T> {}

impl<T> CellWrapper<'_, T> {
    pub fn set(&self, val: T) {
        drop({
            // SAFETY: Since `Self: !Sync` and we don't unsafely share `self`,
            // `self.0` is unique for the duration of this block.
            let r: &mut T = unsafe { addr_of!(self.0).read() };
            mem::replace(r, val)
        });
    }

    // And so on for the rest of the `Cell<T>` API.
}

Clearly, allowing shared references to the same CellWrapper<'a, T> on multiple threads would be disastrous. But it's also clear that making its field public is completely harmless on its own accord, since &mut T already provides the necessary guarantees. It's only if we infer that public construction implies public cross-thread access that we end up with a problem.
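
A short usage sketch, assuming the CellWrapper definition above:

fn main() {
    let mut x = 1;
    let w = CellWrapper(&mut x);
    // Single-threaded use through shared references is fine, just like Cell.
    w.set(2);
    w.set(3);
    drop(w);
    assert_eq!(x, 3);
    // If two threads could hold `&w` at the same time (which `CellWrapper: !Sync`
    // rules out), overlapping `set` calls would manufacture two aliasing `&mut T`
    // from the same pointer -- a data race and UB.
}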

RalfJung commented 10 months ago

In @RalfJung's example, I don't think it's necessarily true that "that can only be a sound API if Foo's field is private".

Oh you are right, somehow I thought the value of the field was relevant. But the field only exists to make this type non-zero-sized. (Here's an actually compiling version of the example.)

So... the safety invariant for shared Foo would be something like: the address that this Foo lives at is associated with the thread that holds the shared reference (the thread exclusively owns the right to add events with that address to the event stream), and right now, in the event stream, this address is not acquired (i.e., the event stream contains the same number of acquires and releases, and any prefix of the stream contains either one more acquire than release or an equal number of each).

acquire_release is safe since it runs in the thread that "owns" the address of the Foo, so it can rely on that thread's ownership of this address for event purposes for the entire duration of the call. Temporarily bringing the acquire and release count out of sync is hence fine.

The axiom that from &mut Foo we can get &Foo is satisfied, since for that transition we can take the exclusive ownership of the mutable reference to trade it against the current-thread ownership of the address in the event stream. (So we have some event stream invariant which is: all addresses are either unclaimed, or they are currently claimed by some thread and we own the memory at that address. When we have an &mut Foo, we know it has to be unclaimed as otherwise the address would be owned twice which is impossible. [This is all fairly standard separation logic reasoning, I apologize for just dumping that here.]) This is all actually more complicated since there are lifetimes involved, but I think it should work. (Really, addresses are either unclaimed or "claimed by some thread for some lifetime".)

Okay then, what happens when the &'static Foo is created? Const-eval creates a Foo somewhere and then needs to use the "&mut Foo to &Foo" axiom to justify creating the shared reference... but that axiom is always executed in some thread! Const-eval doesn't run in a thread, or if it did then it'd have to ensure that the final value can only be used from that thread. Really we should have required that the value of a const is Send, justifying that the one value we compute can be "sent" to all threads.

Absent that, we need the concept of whether a value is valid in "no thread", i.e. the safety invariant is predicated not on a ThreadId but an Option<ThreadId> where None indicates the compile-time world where no threads exist yet. The "&mut Foo to &Foo" axiom then also has to hold for the None thread, which fails. However, there is a special magic exception where if a type is !Freeze then the "&mut T to &T" axiom doesn't have to hold for the None thread, and that's how all the interior mutable types work.

Ugh, this is not very satisfying. :/

LegionMammal978 commented 10 months ago

Oh you are right, somehow I thought the value of the field was relevant. But the field only exists to make this type non-zero-sized.

Ah, I thought you were referring to the rules for infallible promotion. If a type has public fields, then an external user can create a static-promoted instance of it. But if a type has private fields and can only be created with a pub const fn call, then an external user cannot create a static-promoted instance of it, and this bug does not apply.

Okay then, what happens when the &'static Foo is created? Const-eval creates a Foo somewhere and then needs to use the "&mut Foo to &Foo" axiom to justify creating the shared reference... but that axiom is always executed in some thread!

I don't see how &mut Foo → &Foo is relevant here? There's no &mut reference created in this example. Instead, this seems closer to &binding Foo → &Foo. That is, const-eval is translating const F: &'static Foo = &Foo(0); into the moral equivalent of:

const F: &'static Foo = {
    static FOO: Foo = Foo(0); // doesn't actually compile, since `Foo: !Sync`
    &FOO // also doesn't actually compile, since consts can't refer to statics
};

Most bindings (let bindings, argument bindings, pattern bindings, owned captures, etc.) are "owned" by a particular execution of a scope, which belongs to a particular thread. A &binding T can only be created inside this scope, so &binding T → &T is always safe, trading the scope-ownership for the thread-ownership. (Here, I'm considering borrowed closure/coroutine captures as just syntax sugar for references created when the closure/coroutine instance is created.)

However, static bindings are the exception to this rule, being accessible by &binding from any thread. Thus, since &binding T → &T is still a safe operation, but &binding T has no scope-ownership, &T cannot imply any kind of thread-ownership in general. This is why we require T: Sync to create a static binding. But with this bug, const-eval is allowed to create an internal &binding T safely accessible from any thread, breaking &T's thread-ownership that one would expect from T: !Sync.

RalfJung commented 9 months ago

Ah, I thought you were referring to the rules for infallible promotion. If a type has public fields, then an external user can create a static-promoted instance of it. But if a type has private fields and can only be created with a pub const fn call, then an external user cannot create a static-promoted instance of it, and this bug does not apply.

Sadly const fn calls do get promoted in const/static initializer bodies. (Cc https://github.com/rust-lang/rust/issues/80619)
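
A sketch of that (hypothetical Token type with a private field; the negative impl needs a nightly feature):

#![feature(negative_impls)]

mod api {
    pub struct Token(()); // private field: not constructible directly from outside
    impl !Sync for Token {}
    impl Token {
        pub const fn new() -> Token { Token(()) }
    }
}

// The `const fn` call in the const initializer still gets promoted, so an
// external user ends up with a shared &'static to a !Sync value anyway.
const T: &'static api::Token = &api::Token::new();

fn main() {
    let _ = T;
}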

Otherwise, maybe there'd be some trick hiding here to rationalize the current behavior...

I don't see how &mut Foo → &Foo is relevant here?

It is relevant insofar as in my model (RustBelt), that is the only way to create a shared reference. It always starts with a mutable reference that is then used to initialize "sharing for a given lifetime". The usual problem when defining an invariant is to ensure that the initial state satisfies the invariant; in this case, that lemma represents the initial state.

So in my model, when you have let x = ...; &x, what logically happens is &(*&mut x). x never actually gets mutated of course, but given that &mut T → &T is possible, it is convenient to only have a single way of creating a shared reference, rather than two.

LegionMammal978 commented 9 months ago

So in my model, when you have let x = ...; &x, what logically happens is &(*&mut x). x never actually gets mutated of course, but given that &mut T → &T is possible, it is convenient to only have a single way of creating a shared reference, rather than two.

Given that you can take multiple shared references to a binding at the same time, isn't that only a valid transformation given Tree Borrows–like mutable references, which aren't active until written to? And in that case, won't &mut T no longer imply exclusivity just by its existence?