When merging candidates for trait and normalization goals, we incompletely prefer candidates from the environment, i.e. `ParamEnv` and `AliasBound` candidates.
https://github.com/rust-lang/rust/blob/1a449dcfd25143f7e1f6b6f5ddf1c12af361e2ff/compiler/rustc_trait_selection/src/solve/assembly/mod.rs#L778-L793
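For reference, here is a loose sketch of what this preference amounts to. The `Candidate` type, its `constrains_non_regions` field, and the exact ordering below are assumptions for illustration only; they do not mirror the actual rustc code linked above.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum CandidateSource {
    Impl,
    ParamEnv,
    AliasBound,
}

#[derive(Clone, Copy, Debug)]
struct Candidate {
    source: CandidateSource,
    // Whether applying this candidate constrains anything beyond
    // region variables (hypothetical field for this sketch).
    constrains_non_regions: bool,
}

// Merge candidates for a goal, incompletely preferring candidates
// from the environment over impl candidates.
fn merge_candidates(candidates: &[Candidate]) -> Option<Candidate> {
    // Prefer alias bounds even if they constrain non-region
    // inference variables (see the opaque type example below).
    if let Some(c) = candidates
        .iter()
        .find(|c| c.source == CandidateSource::AliasBound)
    {
        return Some(*c);
    }

    // Prefer `ParamEnv` candidates which at most guide region
    // inference (see the WF check example below).
    if let Some(c) = candidates
        .iter()
        .find(|c| c.source == CandidateSource::ParamEnv && !c.constrains_non_regions)
    {
        return Some(*c);
    }

    // Otherwise the goal only succeeds unambiguously with
    // exactly one candidate.
    match candidates {
        [c] => Some(*c),
        _ => None,
    }
}
```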
## Why
### `AliasBound` candidates
We have to prefer `AliasBound` candidates for opaques as self types, so that the item bounds of opaque types are preferred over blanket impls:
```rust
fn impl_trait() -> impl Into<u32> {
    0u16
}

fn main() {
    // There are two possible types for `x`:
    // - `u32` by using the "alias bound" of `impl Into<u32>`
    // - `impl Into<u32>`, i.e. `u16`, by using `impl<T> From<T> for T`
    //
    // We infer the type of `x` to be `u32` here as it is highly likely
    // that this is expected by the user.
    let x = impl_trait().into();
    println!("{}", std::mem::size_of_val(&x));
}
```
If we do not prefer alias bounds, this example breaks with the new solver. For this we need to prefer them even if they constrain non-region inference variables. There are a few existing UI tests which depend on this behavior.
TODO: The issue is even bigger for opaque uses in the defining scope.
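As a rough illustration of that open issue, consider using the opaque inside its own defining scope. The function below is a hypothetical sketch, not one of the affected UI tests:

```rust
fn in_defining_scope(recurse: bool) -> impl Into<u32> {
    if recurse {
        // Here the hidden type of the opaque is not yet constrained,
        // so the same choice between the alias bound of
        // `impl Into<u32>` and the blanket impl has to be made while
        // the opaque may still get revealed later in this body.
        let x: u32 = in_defining_scope(false).into();
        println!("{x}");
    }
    0u16
}
```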
The same pattern also exists for projections. We should probably also prefer those:
```rust
trait Trait {
    type Assoc: Into<u32>;
}

impl<T: Into<u32>> Trait for T {
    type Assoc = T;
}

fn prefer_alias_bound<T: Trait>(x: T::Assoc) {
    // There are two possible types for `x`:
    // - `u32` by using the "alias bound" of `<T as Trait>::Assoc`
    // - `<T as Trait>::Assoc`, i.e. `u16`, by using `impl<T> From<T> for T`
    //
    // We infer the type of `x` to be `u32` here as it is highly likely
    // that this is expected by the user.
    let x = x.into();
    println!("{}", std::mem::size_of_val(&x));
}

fn main() {
    prefer_alias_bound::<u16>(0);
}
```
### `ParamEnv` candidates
We need to prefer `ParamEnv` candidates which only guide region inference, as otherwise impls fail their WF check: ui test
```rust
trait Bar<'a> {}
impl<T> Bar<'static> for T {}

trait Foo<'a> {}
// When checking this impl we have to prove `T: Bar<'a>` given
// `T: Bar<'a>` in the environment. We have two candidates:
// - `T: Bar<'a>` candidate from the environment
// - `impl<T> Bar<'static> for T` impl candidate
//
// The concept of "preferring candidates with no constraints" breaks once
// we introduce regions, as the trait solver does not know whether a given
// constraint is a noop.
impl<'a, T: Bar<'a>> Foo<'a> for T {}

fn main() {}
```
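To spell out why the impl candidate cannot simply be discarded: its `'a == 'static` constraint may or may not be a noop depending on how `'a` is instantiated, and only region checking, not the trait solver, can tell. A hypothetical illustration:

```rust
trait Bar<'a> {}
impl<T> Bar<'static> for T {}

fn requires_bar<'a, T: Bar<'a>>() {}

fn noop() {
    // Using the impl candidate constrains `'a == 'static`,
    // which happens to be a noop here.
    requires_bar::<'static, u32>();
}

fn not_noop<'a>() {
    // Here the same constraint would be a real requirement
    // `'a: 'static` and fail region checking:
    // requires_bar::<'a, u32>();
}
```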