Allow matching on 3+ variant niche-encoded enums to optimize better #139729
base: master
Conversation
Some changes occurred in compiler/rustc_codegen_ssa
tests/codegen/enum/enum-match.rs
Outdated
// CHECK-LABEL: define noundef{{( range\(i8 [0-9]+, [0-9]+\))?}} i8 @match5(i8{{.+}}%0)
// CHECK-NEXT: start:
// CHECK-NEXT: %[[REL_VAR:.+]] = add{{( nsw)?}} i8 %0, -2
// CHECK-NEXT: %[[REL_VAR_WIDE:.+]] = zext i8 %[[REL_VAR]] to i64
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp ult i8 %[[REL_VAR]], 3
// CHECK-NEXT: %[[NOT_IMPOSSIBLE:.+]] = icmp ne i8 %[[REL_VAR]], 1
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp samesign ugt i8 %0, 1
// CHECK-NEXT: %[[NOT_IMPOSSIBLE:.+]] = icmp ne i8 %0, 3
// CHECK-NEXT: call void @llvm.assume(i1 %[[NOT_IMPOSSIBLE]])
// CHECK-NEXT: %[[NICHE_DISCR:.+]] = add nuw nsw i64 %[[REL_VAR_WIDE]], 257
// CHECK-NEXT: %[[DISCR:.+]] = select i1 %[[IS_NICHE]], i64 %[[NICHE_DISCR]], i64 258
// CHECK-NEXT: switch i64 %[[DISCR]],
// CHECK-NEXT: i64 257,
// CHECK-NEXT: i64 258,
// CHECK-NEXT: i64 259,
// CHECK-NEXT: %[[ADJ_DISCR:.+]] = select i1 %[[IS_NICHE]], i8 %0, i8 3
// CHECK-NEXT: switch i8 %[[ADJ_DISCR]], label %[[UNREACHABLE:.+]] [
// CHECK-NEXT: i8 2,
// CHECK-NEXT: i8 3,
// CHECK-NEXT: i8 4,
// CHECK-NEXT: ]
// CHECK: [[UNREACHABLE]]:
// CHECK-NEXT: unreachable
This is, perhaps, my favourite demonstration of the improvement from this PR.
LLVM previously couldn't really improve anything, so it still needed to extend up to 64 bits and match on 257/258/259.
But now what we emit is straightforward enough that LLVM sees what's going on and removes a bunch of the extra work, letting it match in the original tag width `i8` as simply 2/3/4 -- where that 2 and 4 are the values stored directly in the input, which didn't need to be adjusted at all.
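The test source isn't reproduced in this hunk, but an enum of roughly the following shape exercises the same pattern; the names and return values here are made up for illustration:

```rust
// Hypothetical stand-in for the kind of enum `@match5` tests: `bool` only
// uses tag values 0 and 1, so the three fieldless variants can be
// niche-encoded as 2, 3 and 4 in the same byte.
pub enum Example {
    Payload(bool), // untagged variant
    A,             // tag 2
    B,             // tag 3
    C,             // tag 4
}

// With this PR, a match like this can switch directly on the tag byte
// (2/3/4) instead of widening to i64 and switching on 257/258/259.
pub fn match_example(x: Example) -> u8 {
    match x {
        Example::Payload(b) => b as u8,
        Example::A => 13,
        Example::B => 14,
        Example::C => 15,
    }
}
```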
// CHECK: %[[IS_NONE:.+]] = icmp eq i8 %[[RAW]], 2
// CHECK: %[[OPT_DISCR:.+]] = select i1 %[[IS_NONE]], i64 0, i64 1
// CHECK: %[[IS_SOME:.+]] = icmp ne i8 %[[RAW]], 2
// CHECK: %[[OPT_DISCR:.+]] = zext i1 %[[IS_SOME]] to i64
Since I was splitting the 2-variant case out to be even more special-cased than it was before, I figured I might as well take @WaffleLapkin's suggestion from #139098 (comment) and change the polarity so `Option<T>` (and `Result<(), T>`) can just emit a `zext` here instead of the `select`.
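Written as ordinary Rust over the raw tag byte, the polarity change amounts to roughly this (a sketch only; `NONE_TAG` stands for whatever niche value the payload provides, which is 2 in the CHECK lines above):

```rust
// The niche value that encodes `None` (2 in the CHECK lines above).
const NONE_TAG: u8 = 2;

// Old polarity: "is it the niche?" -- needs a select between 0 and 1.
fn discr_via_select(tag: u8) -> i64 {
    if tag == NONE_TAG { 0 } else { 1 }
}

// New polarity: "is it Some?" -- `Some` has discriminant 1, so this is just
// a compare followed by a zero-extension.
fn discr_via_zext(tag: u8) -> i64 {
    (tag != NONE_TAG) as i64
}
```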
// CHECK-LABEL: define noundef{{( range\(i8 [0-9]+, [0-9]+\))?}} i8 @match1(i8{{.+}}%0)
// CHECK-NEXT: start:
// CHECK-NEXT: %[[REL_VAR:.+]] = add{{( nsw)?}} i8 %0, -2
// CHECK-NEXT: %[[REL_VAR_WIDE:.+]] = zext i8 %[[REL_VAR]] to i64
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp ult i8 %[[REL_VAR]], 2
// CHECK-NEXT: %[[NICHE_DISCR:.+]] = add nuw nsw i64 %[[REL_VAR_WIDE]], 1
// CHECK-NEXT: %[[DISCR:.+]] = select i1 %[[IS_NICHE]], i64 %[[NICHE_DISCR]], i64 0
// CHECK-NEXT: switch i64 %[[DISCR]]
// CHECK-NEXT: %[[ADJ_DISCR:.+]] = tail call i8 @llvm.umax.i8(i8 %0, i8 1)
// CHECK-NEXT: switch i8 %[[ADJ_DISCR]], label %[[UNREACHABLE:.+]] [
// CHECK-NEXT: i8 1,
// CHECK-NEXT: i8 2,
// CHECK-NEXT: i8 3,
// CHECK-NEXT: ]
// CHECK: [[UNREACHABLE]]:
// CHECK-NEXT: unreachable
This one's also fun: before it had a whole complicated dance to match on `0_isize`/`1_isize`/`2_isize`, but with things being simpler it now just notices (even in O1) that "hey, all I need is a `umax` and I can match on `1_i8`/`2_i8`/`3_i8`!"
ab48366 to 5cd19b8
5cd19b8 to 66ddcbf
@bors try @rust-timer queue
Allow matching on 3+ variant niche-encoded enums to optimize better

While the two-variant case is most common (and already special-cased), it's pretty unusual to actually need the *fully-general* niche-decoding algorithm (that handles things like 200+ variants wrapping the encoding space and such).

Layout puts the niche-encoded variants on one end of the natural values, so because enums don't have that many variants, it's quite common that there's no wrapping because the handful of variants just end up after the end of the `bool` or `char` or `newtype_index!` or whatever.

This PR thus looks for those cases: situations where the tag's range doesn't actually wrap, and thus we can check for niche-vs-untag in one simple `icmp` without needing to adjust the tag value, and by picking between zero- and sign-extension based on *which* kind of non-wrapping it is, also help LLVM better understand by not forcing it to think about wrapping arithmetic either.

It also emits the operations in a more optimization-friendly order. While the MIR Rvalue calculates a discriminant, so that's what we emit, code normally doesn't actually care about the actual discriminant for these niche-encoded enums. Rather, the discriminant is just getting passed to an equality check (for something like `matches!(foo, TerminatorKind::Goto { .. }`) or a `SwitchInt` (when it's being matched on).

So while the old code would emit, roughly

```rust
if is_niche { tag + ADJUSTMENT } else { UNTAGGED_DISCR }
```

this PR changes it instead to

```rust
(if is_niche { tag } else { UNTAGGED_ADJ_DISCR }) + ADJUSTMENT
```

which on its own might seem odd, but it's actually easier to optimize because what we're actually doing is

```rust
complicated_stuff() + ADJUSTMENT == 4
```

or

```rust
match complicated_stuff() + ADJUSTMENT {
    0 => …,
    1 => …,
    2 => …,
    _ => unreachable,
}
```

or in the generated `PartialEq` for enums with fieldless variants,

```rust
complicated_stuff(a) + ADJUSTMENT == complicated_stuff(b) + ADJUSTMENT
```

and thus that's easy for the optimizer to eliminate the additions:

```rust
complicated_stuff() == 2
```

```rust
match complicated_stuff() {
    7 => …,
    8 => …,
    9 => …,
    _ => unreachable,
}
```

```rust
complicated_stuff(a) == complicated_stuff(b)
```

For good measure I went and made sure that cranelift can do this optimization too 🙂 bytecodealliance/wasmtime#10489

r? WaffleLapkin

Follow-up to rust-lang#139098

--

EDIT later: I happened to notice rust-lang#110197 (comment) -- it looks like there used to be some optimizations in this code, but they got removed for being wrong. I've added lots of tests here; let's hope I can avoid that fate 😬

(Certainly it would be possible to save some complexity by restricting this to the easy case, where it's unsigned-nowrap, the niches are after the natural payload, and all the variant indexes are small.)
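As a worked example of that reordering (all numbers hypothetical): suppose the niche tags are 2..=4 mapping to discriminants 1..=3, the untagged variant is discriminant 0, and so the adjustment is -1.

```rust
// Old shape: the adjustment happens inside the select, so a later
// `discr == 2` check can't see through it.
fn old_discr(tag: u8, is_niche: bool) -> i64 {
    if is_niche { tag as i64 - 1 } else { 0 }
}

// New shape: select first, adjust afterwards. The untagged value is stored
// pre-adjusted (0 - (-1) = 1), so the results are identical.
fn new_discr(tag: u8, is_niche: bool) -> i64 {
    (if is_niche { tag as i64 } else { 1 }) - 1
}

// A comparison like `discr == 2` now lets the optimizer fold the trailing
// -1 into the constant and compare the raw tag against 3 directly.
fn is_discr_2(tag: u8, is_niche: bool) -> bool {
    new_discr(tag, is_niche) == 2 // effectively: is_niche && tag == 3
}
```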
☀️ Try build successful - checks-actions
Finished benchmarking commit (98ab2de): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @bors rollup=never

Instruction count
This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

Max RSS (memory usage)
Results (secondary 1.9%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Cycles
Results (secondary 0.8%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Binary size
Results (primary 0.0%, secondary -0.0%)
This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

Bootstrap: 779.489s -> 779.42s (-0.01%)
if niche_variants.contains(&untagged_variant)
    && bx.cx().sess().opts.optimize != OptLevel::No
let is_natural = bx.icmp(IntPredicate::IntNE, tag, niche_start);
return if untagged_variant == VariantIdx::from_u32(1)
nit: I think this will be cleaner as a guard, i.e. `return if a { b } else { c }` -> `if a { return b }; return c`
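Spelled out in isolation (the condition and branch bodies are placeholders, since the full expression isn't shown in this hunk), the suggestion is:

```rust
// Placeholders standing in for the real condition and branch bodies.
fn niche_case() -> u32 { 1 }
fn untagged_case() -> u32 { 0 }

// Current shape: one trailing `return if ... else ...` expression.
fn decode_current(cond: bool) -> u32 {
    return if cond { niche_case() } else { untagged_case() };
}

// Suggested guard shape: early-return one case, fall through for the other.
fn decode_guarded(cond: bool) -> u32 {
    if cond {
        return niche_case();
    }
    untagged_case()
}
```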
// Work in whichever size is wider, because it's possible for
// the untagged variant to be further away from the niches than
// is possible to represent in the smaller type.
let (wide_size, wide_ibty) = if cast_to_layout.size > tag_size {
I assume `cast_to` can be both wider and thinner than the "natural" tag size, to support `as u8`, `as u128`? Somewhat surprised that we don't just always return the natural type and let the caller deal with it...
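For what it's worth, a minimal sketch of the width choice the diff comment describes (names hypothetical; this is only the "take the larger of the two sizes" decision, not the real codegen types):

```rust
// Do the discriminant arithmetic in whichever of the tag width and the
// requested cast-to width is larger, so values that don't fit the narrower
// type stay representable.
fn pick_wide_bits(tag_bits: u32, cast_to_bits: u32) -> u32 {
    tag_bits.max(cast_to_bits)
}

fn main() {
    assert_eq!(pick_wide_bits(8, 64), 64); // e.g. `as u64` from a u8 tag
    assert_eq!(pick_wide_bits(32, 8), 32); // e.g. `as u8` from a u32 tag
}
```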
let opt_data = if tag_range.no_unsigned_wraparound(tag_size) == Ok(true) {
    let wide_tag = bx.zext(tag, wide_ibty);
    let extend = |x| x;
Looks like this is to be more similar to the signed case, but if so, can we refactor this out into a function?
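One possible shape for that refactor, sketched with made-up names and signatures: the unsigned and signed branches differ mainly in how values are extended to the wide type, so that choice could be passed in rather than duplicating the surrounding arithmetic.

```rust
// Illustrative only: `extend` is the zero- vs sign-extension the two
// branches disagree on; the arithmetic around it is shared.
fn relative_niche(extend: impl Fn(u8) -> i64, tag: u8, niche_start: u8) -> i64 {
    extend(tag).wrapping_sub(extend(niche_start))
}

fn main() {
    // Zero-extension view vs. sign-extension view of the same byte values.
    assert_eq!(relative_niche(|x| x as i64, 200, 198), 2);
    assert_eq!(relative_niche(|x| x as i8 as i64, 200, 198), 2);
}
```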
let opt_data = if tag_range.no_unsigned_wraparound(tag_size) == Ok(true) {
    let wide_tag = bx.zext(tag, wide_ibty);
    let extend = |x| x;
    let wide_niche_start = extend(niche_start);
    let wide_niche_end = extend(niche_end);
    debug_assert!(wide_niche_start <= wide_niche_end);
    let wide_first_variant = extend(first_variant);
    let wide_untagged_variant = extend(untagged_variant);
    let wide_niche_to_variant =
        wide_first_variant.wrapping_sub(wide_niche_start);
    let wide_niche_untagged = wide_size
        .truncate(wide_untagged_variant.wrapping_sub(wide_niche_to_variant));
    let (is_niche, needs_assume) = if tag_range.start == niche_start {
        let end = bx.cx().const_uint_big(tag_llty, niche_end);
        (
            bx.icmp(IntPredicate::IntULE, tag, end),
            wide_niche_untagged <= wide_niche_end,
        )
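For orientation, here's a standalone model of what this non-wrapping unsigned path computes, with the widening and `assume` details left out and the names simplified:

```rust
// Simplified model of the unsigned non-wrapping niche decoding: every value
// is treated as already widened, and the select/assume machinery is elided.
fn decode_discriminant(
    tag: u128,
    niche_start: u128,
    niche_end: u128,
    first_niche_variant: u128,
    untagged_variant: u128,
) -> u128 {
    let is_niche = niche_start <= tag && tag <= niche_end;
    if is_niche {
        // Niche values map linearly onto variant indexes.
        tag - niche_start + first_niche_variant
    } else {
        untagged_variant
    }
}

// E.g. niches 2..=4 encoding variants 1..=3, with variant 0 untagged:
// decode_discriminant(3, 2, 4, 1, 0) == 2.
```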
I'm not sure how to review this, the amount of similarly named variables just overflows my cache :(
I'm generally a bit concerned that this is becoming almost a 300-line function. It's hard to follow what happens there, especially because it seems like different iterations used different names for the same things/there is no consistent terminology. Will try to do a second review pass later.