I have an enum with ~250 variants, all of which are nullary except one, which contains a String: http://sfackler.github.io/doc/postgres/enum.SqlState.html. The optimized `eq` and `ne` implementations generated by deriving are pretty awful: https://gist.github.com/sfackler/d14a653dfc8affd11e99. The huge match generated by deriving gets compiled into an enormous if/else-if chain.
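For concreteness, a self-contained reproduction of the shape (a cut-down three-variant stand-in for the real ~250-variant SqlState; variant names are illustrative):

    // Cut-down stand-in for the real SqlState. On the full ~250-variant
    // enum, the derived eq/ne lower to a long if/else-if chain over the tags.
    #[derive(PartialEq)]
    enum SqlState {
        SuccessfulCompletion,
        Warning,
        Unknown(String),
    }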
The ideal implementation seems like it would be:

    impl PartialEq for SqlState {
        fn eq(&self, other: &SqlState) -> bool {
            // Match on the references rather than `(*self, *other)`, which
            // would try to move the String payload out of borrowed content.
            match (self, other) {
                // The one non-nullary variant compares its payload.
                (&SqlState::Unknown(ref a), &SqlState::Unknown(ref b)) => a == b,
                // Every nullary pair reduces to a single tag comparison.
                _ => magic_tag_extraction_intrinsic(self) == magic_tag_extraction_intrinsic(other),
            }
        }
    }
Would it make sense to add that intrinsic and teach deriving to use it for all nullary variants? Or can we instead teach trans/LLVM to be smarter about analyzing enormous match statements?
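For reference, std::mem::discriminant in the current standard library can stand in for such a tag-extraction intrinsic. A minimal compilable sketch against the cut-down enum above (redeclared here without the derive, since we supply the impl by hand):

    use std::mem;

    enum SqlState {
        SuccessfulCompletion,
        Warning,
        Unknown(String),
    }

    impl PartialEq for SqlState {
        fn eq(&self, other: &SqlState) -> bool {
            match (self, other) {
                // The single data-carrying variant still compares its payload.
                (&SqlState::Unknown(ref a), &SqlState::Unknown(ref b)) => a == b,
                // All nullary pairs share one fall-through arm that compares
                // only the tags, rather than an arm per variant.
                _ => mem::discriminant(self) == mem::discriminant(other),
            }
        }
    }

    fn main() {
        assert!(SqlState::Warning == SqlState::Warning);
        assert!(SqlState::Warning != SqlState::SuccessfulCompletion);
        assert!(SqlState::Unknown("XX000".into()) != SqlState::Unknown("XX001".into()));
    }

With this shape the ~250 nullary variants collapse into the single wildcard arm, so the generated code is one payload comparison plus one tag comparison instead of a chain of per-variant branches.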
cc @Aatch