
Allow matching on 3+ variant niche-encoded enums to optimize better #139729

Open · wants to merge 2 commits into base: master
Conversation

@scottmcm scottmcm commented Apr 13, 2025

While the two-variant case is most common (and already special-cased), it's pretty unusual to actually need the *fully-general* niche-decoding algorithm (the one that handles things like 200+ variants wrapping around the encoding space).

Layout puts the niche-encoded variants on one end of the natural values, and because enums don't have that many variants, it's quite common that there's no wrapping: the handful of variants just ends up after the end of the `bool` or `char` or `newtype_index!` or whatever.

This PR therefore looks for those cases: situations where the tag's range doesn't actually wrap. There we can check for niche-vs-untagged in one simple `icmp` without adjusting the tag value first, and by picking between zero- and sign-extension based on *which* kind of non-wrapping it is, we also avoid forcing LLVM to think about wrapping arithmetic.
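To make the non-wrapping case concrete, here is a hypothetical sketch (illustrative names and values, not the actual compiler code). Consider an enum whose payload is a `bool`, so the niche variants land on tag bytes past the `bool`'s 0/1 with no wraparound:

```rust
// Hypothetical example: enum E { A(bool), B, C, D }
// `A` is the untagged variant (tag bytes 0 and 1 are the bool), while the
// niche variants B, C, D get tag bytes 2, 3, 4 -- past the end of the bool,
// so the valid tag range never wraps.
const NICHE_START: u8 = 2; // first niche tag value (assumed for this sketch)
const FIRST_NICHE_VARIANT: u8 = 1; // variant index of B
const UNTAGGED_VARIANT: u8 = 0; // variant index of A

fn variant_index(tag: u8) -> u8 {
    // Because the range doesn't wrap, one unsigned comparison distinguishes
    // niche from untagged, with no tag adjustment needed first.
    if tag >= NICHE_START {
        (tag - NICHE_START) + FIRST_NICHE_VARIANT
    } else {
        UNTAGGED_VARIANT
    }
}
```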

It also emits the operations in a more optimization-friendly order. The MIR Rvalue calculates a discriminant, so that's what we emit, but code normally doesn't care about the actual discriminant of these niche-encoded enums. Rather, the discriminant is just getting passed to an equality check (for something like `matches!(foo, TerminatorKind::Goto { .. })`) or to a `SwitchInt` (when the enum is being matched on).

So while the old code would emit, roughly,

```rust
if is_niche { tag + ADJUSTMENT } else { UNTAGGED_DISCR }
```

this PR changes it instead to

```rust
(if is_niche { tag } else { UNTAGGED_ADJ_DISCR }) + ADJUSTMENT
```

which on its own might seem odd, but is actually easier to optimize, because what we're really computing is

```rust
complicated_stuff() + ADJUSTMENT == 4
```

or

```rust
match complicated_stuff() + ADJUSTMENT { 0 => …, 1 => …, 2 => …, _ => unreachable }
```

or, in the generated `PartialEq` for enums with fieldless variants,

```rust
complicated_stuff(a) + ADJUSTMENT == complicated_stuff(b) + ADJUSTMENT
```

which makes it easy for the optimizer to eliminate the additions:

```rust
complicated_stuff() == 2
match complicated_stuff() { 7 => …, 8 => …, 9 => …, _ => unreachable }
complicated_stuff(a) == complicated_stuff(b)
```
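That cancellation can be sanity-checked directly. A small sketch with illustrative constants (the real `ADJUSTMENT` depends on the layout): comparing `f() + ADJ == K` is equivalent to comparing `f() == K - ADJ`, so the adjustment folds into the constant.

```rust
// Illustrative only: the adjustment folds into the comparison constant,
// which is exactly the rewrite the optimizer performs.
const ADJUSTMENT: u8 = 5; // hypothetical value

fn with_adjustment(tag: u8) -> bool {
    tag.wrapping_add(ADJUSTMENT) == 7
}

fn folded(tag: u8) -> bool {
    // 7 - 5 == 2, so this is the simplified form the optimizer reaches
    tag == 7u8.wrapping_sub(ADJUSTMENT)
}
```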

For good measure I went and made sure that cranelift can do this optimization too 🙂 bytecodealliance/wasmtime#10489

r? WaffleLapkin
Follow-up to #139098

--

EDIT later: I happened to notice #110197 (comment) -- it looks like there used to be some optimizations in this code, but they got removed for being wrong. I've added lots of tests here; let's hope I can avoid that fate 😬

(Certainly it would be possible to save some complexity by restricting this to the easy case, where it's unsigned-nowrap, the niches are after the natural payload, and all the variant indexes are small.)

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Apr 13, 2025
rustbot commented Apr 13, 2025

Some changes occurred in compiler/rustc_codegen_ssa

cc @WaffleLapkin

Comment on lines 460 to 472
// CHECK-LABEL: define noundef{{( range\(i8 [0-9]+, [0-9]+\))?}} i8 @match5(i8{{.+}}%0)
// CHECK-NEXT: start:
// CHECK-NEXT: %[[REL_VAR:.+]] = add{{( nsw)?}} i8 %0, -2
// CHECK-NEXT: %[[REL_VAR_WIDE:.+]] = zext i8 %[[REL_VAR]] to i64
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp ult i8 %[[REL_VAR]], 3
// CHECK-NEXT: %[[NOT_IMPOSSIBLE:.+]] = icmp ne i8 %[[REL_VAR]], 1
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp samesign ugt i8 %0, 1
// CHECK-NEXT: %[[NOT_IMPOSSIBLE:.+]] = icmp ne i8 %0, 3
// CHECK-NEXT: call void @llvm.assume(i1 %[[NOT_IMPOSSIBLE]])
// CHECK-NEXT: %[[NICHE_DISCR:.+]] = add nuw nsw i64 %[[REL_VAR_WIDE]], 257
// CHECK-NEXT: %[[DISCR:.+]] = select i1 %[[IS_NICHE]], i64 %[[NICHE_DISCR]], i64 258
// CHECK-NEXT: switch i64 %[[DISCR]],
// CHECK-NEXT: i64 257,
// CHECK-NEXT: i64 258,
// CHECK-NEXT: i64 259,
// CHECK-NEXT: %[[ADJ_DISCR:.+]] = select i1 %[[IS_NICHE]], i8 %0, i8 3
// CHECK-NEXT: switch i8 %[[ADJ_DISCR]], label %[[UNREACHABLE:.+]] [
// CHECK-NEXT: i8 2,
// CHECK-NEXT: i8 3,
// CHECK-NEXT: i8 4,
// CHECK-NEXT: ]
// CHECK: [[UNREACHABLE]]:
// CHECK-NEXT: unreachable
scottmcm (Member, Author) commented:

This is, perhaps, my favourite demonstration of the improvement from this PR.

LLVM previously couldn't really improve anything here, so it still had to extend up to 64 bits and match on 257/258/259.

Now what we emit is straightforward enough that LLVM sees what's going on and removes a bunch of the extra work, letting it match in the original tag width `i8` on simply 2/3/4 -- where that 2 and 4 are the values stored directly in the input, needing no adjustment at all.
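For reference, the kind of source that produces IR of this shape is an exhaustive match on a niche-encoded enum. A hypothetical sketch (the real test's enum may differ):

```rust
// Hypothetical example of the shape of code this test exercises: an
// exhaustive match on a niche-encoded enum, which after this PR can lower
// to a switch on the raw i8-width tag instead of a widened discriminant.
#[derive(Clone, Copy)]
enum E {
    A(bool), // untagged variant: tag bytes 0/1 are the bool
    B,       // niche variants take the tag bytes after the bool
    C,
    D,
}

fn discriminating_match(e: E) -> u8 {
    match e {
        E::A(_) => 10,
        E::B => 11,
        E::C => 12,
        E::D => 13,
    }
}
```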

Comment on lines -63 to +64
// CHECK: %[[IS_NONE:.+]] = icmp eq i8 %[[RAW]], 2
// CHECK: %[[OPT_DISCR:.+]] = select i1 %[[IS_NONE]], i64 0, i64 1
// CHECK: %[[IS_SOME:.+]] = icmp ne i8 %[[RAW]], 2
// CHECK: %[[OPT_DISCR:.+]] = zext i1 %[[IS_SOME]] to i64
scottmcm (Member, Author) commented:

Since I was splitting the 2-variant case out to be even more special-cased than it was before, I figured I might as well take @WaffleLapkin's suggestion from #139098 (comment) and change the polarity so `Option<T>` (and `Result<(), T>`) can just emit a `zext` here instead of the `select`.
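The two CHECK lines above correspond to this sketch (the niche tag value 2 is taken from this particular test; everything else is illustrative):

```rust
// Sketch of the polarity change for the 2-variant case: the discriminant
// becomes a zero-extension of "is it Some?" rather than a select between
// two constants. Here 2 is None's niche tag in this test.
const NONE_NICHE: u8 = 2;

fn option_discr(raw_tag: u8) -> u64 {
    // zext(icmp ne raw, NICHE): None -> 0, Some -> 1
    (raw_tag != NONE_NICHE) as u64
}
```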

Comment on lines 40 to +49
// CHECK-LABEL: define noundef{{( range\(i8 [0-9]+, [0-9]+\))?}} i8 @match1(i8{{.+}}%0)
// CHECK-NEXT: start:
// CHECK-NEXT: %[[REL_VAR:.+]] = add{{( nsw)?}} i8 %0, -2
// CHECK-NEXT: %[[REL_VAR_WIDE:.+]] = zext i8 %[[REL_VAR]] to i64
// CHECK-NEXT: %[[IS_NICHE:.+]] = icmp ult i8 %[[REL_VAR]], 2
// CHECK-NEXT: %[[NICHE_DISCR:.+]] = add nuw nsw i64 %[[REL_VAR_WIDE]], 1
// CHECK-NEXT: %[[DISCR:.+]] = select i1 %[[IS_NICHE]], i64 %[[NICHE_DISCR]], i64 0
// CHECK-NEXT: switch i64 %[[DISCR]]
// CHECK-NEXT: %[[ADJ_DISCR:.+]] = tail call i8 @llvm.umax.i8(i8 %0, i8 1)
// CHECK-NEXT: switch i8 %[[ADJ_DISCR]], label %[[UNREACHABLE:.+]] [
// CHECK-NEXT: i8 1,
// CHECK-NEXT: i8 2,
// CHECK-NEXT: i8 3,
// CHECK-NEXT: ]
// CHECK: [[UNREACHABLE]]:
// CHECK-NEXT: unreachable
scottmcm (Member, Author) commented:

This one's also fun: before, there was a whole complicated dance to match on `0_isize`/`1_isize`/`2_isize`, but with things being simpler, LLVM now notices (even at O1) that all it needs is a `umax`, after which it can match on `1_i8`/`2_i8`/`3_i8`!
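The `umax` trick can be checked by hand. A sketch using this test's apparent constants (niche tags starting at 2, untagged adjusted discriminant 1; both assumptions read off the CHECK lines above):

```rust
// Sketch: with niche tags >= 2 and an untagged adjusted discriminant of 1,
// the select form and the umax form agree everywhere, which is exactly the
// rewrite LLVM now finds.
fn adj_discr_select(tag: u8) -> u8 {
    if tag >= 2 { tag } else { 1 }
}

fn adj_discr_umax(tag: u8) -> u8 {
    tag.max(1) // llvm.umax.i8
}
```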


@Mark-Simulacrum (Member) commented:
@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Apr 13, 2025
bors added a commit to rust-lang-ci/rust that referenced this pull request Apr 13, 2025
Allow matching on 3+ variant niche-encoded enums to optimize better

bors commented Apr 13, 2025

⌛ Trying commit 66ddcbf with merge 98ab2de...

bors commented Apr 13, 2025

☀️ Try build successful - checks-actions
Build commit: 98ab2de (98ab2de9448beec4c7fd51cd89043c04c0e88d40)


@rust-timer

Finished benchmarking commit (98ab2de): comparison URL.

Overall result: ❌✅ regressions and improvements - please read the text below

Benchmarking this pull request likely means that it is perf-sensitive, so we're automatically marking it as not fit for rolling up. While you can manually mark this PR as fit for rollup, we strongly recommend not doing so since this PR may lead to changes in compiler perf.

Next Steps: If you can justify the regressions found in this try perf run, please indicate this with @rustbot label: +perf-regression-triaged along with sufficient written justification. If you cannot justify the regressions please fix the regressions and do another perf run. If the next run shows neutral or positive results, the label will be automatically removed.

@bors rollup=never
@rustbot label: -S-waiting-on-perf +perf-regression

Instruction count

This is the most reliable metric that we have; it was used to determine the overall result at the top of this comment. However, even this metric can sometimes exhibit noise.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.9% | [0.9%, 0.9%] | 1 |
| Regressions ❌ (secondary) | 2.0% | [1.4%, 2.4%] | 3 |
| Improvements ✅ (primary) | -0.6% | [-1.2%, -0.3%] | 4 |
| Improvements ✅ (secondary) | -0.9% | [-1.3%, -0.7%] | 6 |
| All ❌✅ (primary) | -0.3% | [-1.2%, 0.9%] | 5 |

Max RSS (memory usage)

Results (secondary 1.9%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 1.9% | [1.9%, 1.9%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | - | - | 0 |

Cycles

Results (secondary 0.8%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 2.4% | [2.4%, 2.5%] | 2 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.5% | [-2.5%, -2.5%] | 1 |
| All ❌✅ (primary) | - | - | 0 |

Binary size

Results (primary 0.0%, secondary -0.0%)

This is a less reliable metric that may be of interest but was not used to determine the overall result at the top of this comment.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 0.7% | [0.7%, 0.7%] | 3 |
| Regressions ❌ (secondary) | 1.2% | [1.2%, 1.2%] | 3 |
| Improvements ✅ (primary) | -0.0% | [-0.1%, -0.0%] | 53 |
| Improvements ✅ (secondary) | -0.1% | [-0.3%, -0.0%] | 45 |
| All ❌✅ (primary) | 0.0% | [-0.1%, 0.7%] | 56 |

Bootstrap: 779.489s -> 779.42s (-0.01%)
Artifact size: 365.53 MiB -> 365.36 MiB (-0.05%)

@rustbot rustbot added perf-regression Performance regression. and removed S-waiting-on-perf Status: Waiting on a perf run to be completed. labels Apr 13, 2025
if niche_variants.contains(&untagged_variant)
&& bx.cx().sess().opts.optimize != OptLevel::No
let is_natural = bx.icmp(IntPredicate::IntNE, tag, niche_start);
return if untagged_variant == VariantIdx::from_u32(1)
Reviewer (Member) commented:

nit: I think this will be cleaner as a guard, i.e. `return if a { b } else { c }` -> `if a { return b }; return c`

// Work in whichever size is wider, because it's possible for
// the untagged variant to be further away from the niches than
// is possible to represent in the smaller type.
let (wide_size, wide_ibty) = if cast_to_layout.size > tag_size {
Reviewer (Member) commented:

I assume `cast_to` can be both wider and narrower than the "natural" tag size, to support `as u8` and `as u128`? Somewhat surprised that we don't just always return the natural type and let the caller deal with it...


let opt_data = if tag_range.no_unsigned_wraparound(tag_size) == Ok(true) {
let wide_tag = bx.zext(tag, wide_ibty);
let extend = |x| x;
Reviewer (Member) commented:

Looks like this is meant to parallel the signed case; if so, can we refactor it out into a function?

Comment on lines +583 to +600
let opt_data = if tag_range.no_unsigned_wraparound(tag_size) == Ok(true) {
let wide_tag = bx.zext(tag, wide_ibty);
let extend = |x| x;
let wide_niche_start = extend(niche_start);
let wide_niche_end = extend(niche_end);
debug_assert!(wide_niche_start <= wide_niche_end);
let wide_first_variant = extend(first_variant);
let wide_untagged_variant = extend(untagged_variant);
let wide_niche_to_variant =
wide_first_variant.wrapping_sub(wide_niche_start);
let wide_niche_untagged = wide_size
.truncate(wide_untagged_variant.wrapping_sub(wide_niche_to_variant));
let (is_niche, needs_assume) = if tag_range.start == niche_start {
let end = bx.cx().const_uint_big(tag_llty, niche_end);
(
bx.icmp(IntPredicate::IntULE, tag, end),
wide_niche_untagged <= wide_niche_end,
)
Reviewer (Member) commented:

I'm not sure how to review this; the number of similarly named variables just overflows my cache :(

@WaffleLapkin

I'm generally a bit concerned that this becomes almost a 300-line function. It's hard to follow what happens there, especially because different iterations seem to have used different names for the same things, so there is no consistent terminology. Will try to do a second review pass later.

Labels
perf-regression Performance regression. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
7 participants