Open Diggsey opened 4 years ago
I agree that this is a principle we should uphold, with one caveat: lifetimes can influence trait resolution, and thus the code that gets executed, and of course that is UB-relevant.
Moreover, I don't know if or to what extent lifetimes influence MIR building, in particular deciding when temporaries get dropped and how and where Storage* annotations are placed (which control when stack variables may be accessed). But I think this is also lifetime-independent currently.
But after name, trait and method resolution, the MIR semantics should be entirely independent of lifetimes. Certainly that is true for Miri right now (Miri runs lifetime-erased MIR). That also means lifetimes are useless for optimizations.
However, no such principle has been officially enshrined by the lang team, as far as I know. It is just a personal opinion.
Cc @rust-lang/wg-mir-opt
lifetimes can influence trait resolution
Do you mean lifetimes bound in higher-ranked types?
All other lifetimes can't affect impl selection, they're only imposed as restrictions once an impl has been found (this also means two impls overlap if they only differ in non-late-bound-inside-higher-ranked-types lifetimes).
cc @nikomatsakis since I don't know what's planned/desired around the higher-ranked cases.
However, no such principle has been officially enshrined by the lang team, as far as I know.
The closest we've been (assuming you are correct) was trait specialization, because there are soundness implications there.
All other lifetimes can't affect impl selection, they're only imposed as restrictions once an impl has been found
Oh, I thought they could affect selection... but anyway the high-level point is, lifetimes are not erased for trait/method resolution, so we cannot entirely ignore them when figuring out the semantics of surface Rust.
but anyway the high-level point is, lifetimes are not erased for trait/method resolution
Sorry, I didn't have this on hand earlier: they are.
We still have to figure out what the impl's lifetime parameters map back to, in order to propagate the lifetimes correctly, but that's only for restricting lifetime inference. The choice of impl is entirely predicated on the lifetime-erased type (again, modulo the strange higher-ranked situation).
Oh. Okay. Today I learned.
I wonder what the action item is here. What place would be the right one to say that lifetimes do not affect optimizations, and how would we make that statement precise enough to be useful and not constrain further development of the language?
I think the answer to this is fairly obvious, but I couldn't see it stated anywhere. Is it the case that lifetimes are only relevant for determining whether code is safe and not whether it is valid? In other words, lifetimes (or incorrect lifetimes) can never cause UB by themselves.
Furthermore, the compiler can never optimise something based on e.g. the knowledge that a reference has 'static lifetime, and transmuting between different lifetimes is always OK as long as the stacked borrows rules are followed at runtime.