While investigating rust-lang/rust#112583, it seems that the issue reveals a bug in the new solver rather than being fixed by it. The new solver accepts this:
```rust
trait Trait {
    type Ty;
}

impl<T> Trait for T {
    type Ty = ();
}

fn test<T: Trait>() {
    let _: <T as Trait>::Ty = ();
}
```
It uses the impl candidate for normalization instead of the `ParamEnv` candidate.
This means that we're using a different candidate when solving `T: Trait` than when normalizing `<T as Trait>::Ty`. I don't think it is unsound, but it can lead to some surprising behavior, like the following requiring `'a == 'static` despite using the impl candidate for normalization:
```rust
trait Trait<'a> {
    type Ty;
}

impl<T> Trait<'_> for T {
    type Ty = ();
}

fn test<'a, T: Trait<'static>>() {
    let _: <T as Trait<'a>>::Ty = ();
    //~^ ERROR lifetime may not live long enough
}
```