
Only work-steal in the main loop for rustc_thread_pool #143035


Merged
bors merged 5 commits into rust-lang:master from ywxt:less-work-steal on Jul 7, 2025

Conversation


@ywxt ywxt commented Jun 26, 2025

This PR is a replica of rust-lang/rustc-rayon#12: it retains work-stealing only in the main loop of rustc_thread_pool.

r? @oli-obk

cc @SparrowLii @Zoxc @cuviper

Updates #113349
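
For context, here is a minimal sketch of the idea — not the actual rustc_thread_pool code; the names (`Worker`, `main_loop`, `wait_until`, `Latch`) and the simplified queue/sleep machinery are hypothetical. The point is that stealing happens only in the worker's top-level loop, while blocking points inside jobs (scopes, joins) just wait on their latch instead of stealing more work:

```rust
use std::collections::VecDeque;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Mutex;

// Hypothetical toy types, only to illustrate the shape of the change.
type Job = Box<dyn FnOnce() + Send>;

struct Latch {
    set: AtomicBool,
}

struct Worker {
    local: Mutex<VecDeque<Job>>,
    // ...queues of other workers to steal from, sleep/wake machinery, etc.
}

impl Worker {
    /// Top-level worker loop: after this change, the only place that steals.
    fn main_loop(&self) {
        loop {
            match self.pop_local().or_else(|| self.steal_from_others()) {
                Some(job) => job(),
                None => self.sleep_until_woken(),
            }
        }
    }

    /// Blocking inside a job (e.g. waiting for a scope or join to finish)
    /// no longer steals other jobs; it simply waits until the latch is set.
    fn wait_until(&self, latch: &Latch) {
        while !latch.set.load(Ordering::Acquire) {
            self.sleep_until_woken();
        }
    }

    fn pop_local(&self) -> Option<Job> {
        self.local.lock().unwrap().pop_front()
    }

    fn steal_from_others(&self) -> Option<Job> {
        None // stealing details elided in this sketch
    }

    fn sleep_until_woken(&self) {
        std::thread::yield_now(); // stand-in for the real sleep/wake protocol
    }
}
```

The wall-time cost of this trade-off at different thread counts is what the perf runs later in this thread measure.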

@rustbot rustbot added S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. labels Jun 26, 2025

rustbot commented Jun 26, 2025

These commits modify the Cargo.lock file. Unintentional changes to Cargo.lock can be introduced when switching branches and rebasing PRs.

If this was unintentional then you should revert the changes before this PR is merged.
Otherwise, you can ignore this comment.

@@ -52,6 +54,12 @@ struct ScopeBase<'scope> {
/// latch to track job counts
job_completed_latch: CountLatch,

/// Jobs that have been spawned, but not yet started.
pending_jobs: Mutex<IndexSet<JobRefId>>,
Contributor

Why is this swapped to IndexSet?

Contributor Author

The lint is "prefer FxHashSet over HashSet, it has better performance".

Should I suppress it?
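
(For reference, this message matches the compiler's internal lint that asks for the FxHash* types — a faster, non-DoS-resistant hasher. A minimal sketch of what the suggested replacement looks like, using the external `rustc-hash` crate name here; inside the compiler the same type is reached through `rustc_data_structures::fx`:)

```rust
// Minimal sketch, not the rustc_thread_pool code: FxHashSet is a drop-in
// replacement for std::collections::HashSet with a faster hash function.
use rustc_hash::FxHashSet;

fn main() {
    let mut pending: FxHashSet<u64> = FxHashSet::default();
    pending.insert(42);
    assert!(pending.contains(&42));
}
```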

Member
@SparrowLii SparrowLii Jun 27, 2025

The impact on performance needs to be measured with rustc-perf. For now, we can keep the original implementation :)

Contributor Author

It has been restored.


Zoxc commented Jun 26, 2025

You should add me as a co-author for proper copyright assignment.


ywxt commented Jun 26, 2025

> You should add me as a co-author for proper copyright assignment.

How do I do that? Sorry, I'm not familiar with it.

Done
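
(For reference: GitHub picks up co-authors from commit-message trailers. Adding a line of the form `Co-authored-by: Name <name@example.com>` after a blank line at the end of the commit message — for example via `git commit --amend` — credits the additional author on the merged commit. The name and email here are placeholders.)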


@rustbot rustbot added has-merge-commits PR has merge commits, merge with caution. S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Jun 26, 2025
@ywxt ywxt force-pushed the less-work-steal branch from 699035a to ccd9af7 Compare June 26, 2025 02:21
@rustbot rustbot removed has-merge-commits PR has merge commits, merge with caution. S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Jun 26, 2025
@@ -796,14 +797,83 @@ impl WorkerThread {
/// stealing tasks as necessary.
Contributor Author

I'm not sure whether this comment, which mentions "stealing tasks", is still correct now that work stealing has been removed here.

@ywxt ywxt force-pushed the less-work-steal branch from a8a202d to 273c9b6 Compare June 26, 2025 06:55
@ywxt ywxt requested a review from Zoxc June 26, 2025 12:28
@ywxt ywxt force-pushed the less-work-steal branch from 273c9b6 to a0178fd Compare June 27, 2025 02:42
@ywxt ywxt force-pushed the less-work-steal branch from 486fb86 to 36462f9 Compare June 28, 2025 10:14

oli-obk commented Jun 30, 2025

@bors try @rust-timer queue


@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jun 30, 2025
bors added a commit that referenced this pull request Jun 30, 2025
Only work-steal in the main loop for rustc_thread_pool

This PR is a replica of <rust-lang/rustc-rayon#12> that only retained work-steal in the main loop for rustc_thread_pool.

r? `@oli-obk`

cc `@SparrowLii` `@Zoxc` `@cuviper`

Updates #113349

bors commented Jun 30, 2025

⌛ Trying commit 36462f9 with merge 4fc7758...


Kobzol commented Jun 30, 2025

Sorry about the build failure; that was a temporary bug. It shouldn't have affected any perf numbers, though.


SparrowLii commented Jul 1, 2025

AFAIK the perf tool only tests single-threaded scenarios, so there should not be such performance regressions. We need to identify the problem.

@SparrowLii

Once the perf tool is fixed, we can try again.


Kobzol commented Jul 1, 2025

There is one benchmark that runs with 4 threads in rustc-perf now. We can't compare it by icount though, ofc.


Kobzol commented Jul 1, 2025

@bors2 try @rust-timer queue



rust-bors bot commented Jul 1, 2025

⌛ Trying commit 36462f9 with merge c41093d

To cancel the try build, run the command @bors2 try cancel.

rust-bors bot added a commit that referenced this pull request Jul 1, 2025
Only work-steal in the main loop for rustc_thread_pool


This PR is a replica of <rust-lang/rustc-rayon#12> that only retained work-steal in the main loop for rustc_thread_pool.

r? `@oli-obk`

cc `@SparrowLii` `@Zoxc` `@cuviper`

Updates #113349
@rustbot rustbot added the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jul 1, 2025

rust-bors bot commented Jul 1, 2025

☀️ Try build successful (CI)
Build commit: c41093d (c41093d894690518d74a99588aa7a0cc9c02c73b, parent: 6988a8fea774a2a20ebebddb7dbf15dd6ef594f9)


@rust-timer

Finished benchmarking commit (c41093d): comparison URL.

Overall result: no relevant changes - no action needed

Benchmarking this pull request means it may be perf-sensitive – we'll automatically label it not fit for rolling up. You can override this, but we strongly advise not to, due to possible changes in compiler perf.

@bors rollup=never
@rustbot label: -S-waiting-on-perf -perf-regression

Instruction count

This benchmark run did not return any relevant results for this metric.

Max RSS (memory usage)

Results (primary 3.0%, secondary 1.0%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.0% | [2.6%, 3.6%] | 3 |
| Regressions ❌ (secondary) | 2.6% | [2.1%, 2.9%] | 3 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -0.6% | [-0.8%, -0.4%] | 3 |
| All ❌✅ (primary) | 3.0% | [2.6%, 3.6%] | 3 |

Cycles

Results (primary 1.8%, secondary 2.2%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | 3.2% | [1.4%, 10.8%] | 10 |
| Regressions ❌ (secondary) | 3.5% | [1.5%, 5.1%] | 9 |
| Improvements ✅ (primary) | -1.7% | [-2.6%, -1.0%] | 4 |
| Improvements ✅ (secondary) | -3.9% | [-5.3%, -2.5%] | 2 |
| All ❌✅ (primary) | 1.8% | [-2.6%, 10.8%] | 14 |

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 461.861s -> 463.58s (0.37%)
Artifact size: 372.23 MiB -> 372.20 MiB (-0.01%)

@rustbot rustbot removed the S-waiting-on-perf Status: Waiting on a perf run to be completed. label Jul 1, 2025

ywxt commented Jul 1, 2025

How about this performance? Is it acceptable?


SparrowLii commented Jul 1, 2025

@Kobzol Are all the cases in this result run with 4 threads?

Oh I saw it. serde-1.0.219-threads4 has a performance regression of over 10% :(


SparrowLii commented Jul 1, 2025

> How about this performance? Is it acceptable?

The wall-time of some cases has regressed. I am confused, since these cases run single-threaded and should not be affected by the thread pool IMO.

We need to identify the cause. I think you can do more local performance testing single-threaded/multi-threaded (like I listed here).
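
(For reference, one way to run such a local comparison: build this branch and a baseline, link each as a rustup toolchain, and time full builds of a benchmark crate at different thread counts. The toolchain names `stage1-pr` and `stage1-baseline` below are hypothetical, and `-Zthreads=N` selects the parallel front end's thread count on nightly-style builds.)

```
# Hypothetical local comparison; adjust toolchain names and the benchmark crate.
cargo clean && time RUSTFLAGS="-Zthreads=1" cargo +stage1-pr build
cargo clean && time RUSTFLAGS="-Zthreads=4" cargo +stage1-pr build
cargo clean && time RUSTFLAGS="-Zthreads=4" cargo +stage1-baseline build
```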


Kobzol commented Jul 1, 2025

> @Kobzol Are all the cases in this result run with 4 threads?
>
> Oh I saw it. serde-1.0.219-threads4 has a performance regression of over 10% :(

Note that we are switching the benchmarking collector to a different machine today, and this result was only the second benchmark run produced on the new machine. So if you want to get more stable results, with noise threshold updated, I would wait a few days.

That being said, we currently only have one multithreaded benchmark in the suite. Local benchmarks would probably be more useful here.


ywxt commented Jul 7, 2025

I ran some benchmarks on a local machine. Here are the results, summarized.

In short, there is little impact (below 4%) when the number of threads exceeds 8, but there is probably a significant regression with fewer than 8 threads (the average wall time has increased by 5%, and by over 9% for full compilation on 4 threads).


SparrowLii commented Jul 7, 2025

Personally I think the results are acceptable. Especially in full and incr-full scenarios (which I believe are the primary contexts where the parallel front end demonstrates its value), we still get compilation time decreases of 20 to 30+ percent.

My suggestion is that we merge this PR and mark the original work-stealing with FIXMEs, as one potential means of further improving compilation performance.


oli-obk commented Jul 7, 2025

@bors r+


bors commented Jul 7, 2025

📌 Commit 36462f9 has been approved by oli-obk

It is now in the queue for this repository.

@bors bors added S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. and removed S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. labels Jul 7, 2025

bors commented Jul 7, 2025

⌛ Testing commit 36462f9 with merge 25cf7d1...


bors commented Jul 7, 2025

☀️ Test successful - checks-actions
Approved by: oli-obk
Pushing 25cf7d1 to master...

@bors bors added the merged-by-bors This PR was explicitly merged by bors. label Jul 7, 2025
@bors bors merged commit 25cf7d1 into rust-lang:master Jul 7, 2025
11 checks passed
@rustbot rustbot added this to the 1.90.0 milestone Jul 7, 2025

github-actions bot commented Jul 7, 2025

What is this? This is an experimental post-merge analysis report that shows differences in test outcomes between the merged PR and its parent PR.

Comparing 8df4a58 (parent) -> 25cf7d1 (this PR)

Test differences

49 test diffs

Stage 1

  • broadcast::tests::broadcast_mutual: pass -> ignore (J0)
  • broadcast::tests::broadcast_mutual_sleepy: pass -> ignore (J0)
  • join::tests::join_context_neither: pass -> ignore (J0)
  • scope::tests::fifo_order: pass -> ignore (J0)
  • scope::tests::lifo_order: pass -> ignore (J0)
  • scope::tests::mixed_fifo_lifo_order: pass -> ignore (J0)
  • scope::tests::mixed_fifo_order: pass -> ignore (J0)
  • scope::tests::mixed_lifo_fifo_order: pass -> ignore (J0)
  • scope::tests::mixed_lifo_order: pass -> ignore (J0)
  • scope::tests::nested_fifo_lifo_order: pass -> ignore (J0)
  • scope::tests::nested_fifo_order: pass -> ignore (J0)
  • scope::tests::nested_lifo_fifo_order: pass -> ignore (J0)
  • scope::tests::nested_lifo_order: pass -> ignore (J0)
  • scope::tests::scope_spawn_broadcast_nested: pass -> ignore (J0)
  • spawn::tests::fifo_lifo_order: pass -> ignore (J0)
  • spawn::tests::fifo_order: pass -> ignore (J0)
  • spawn::tests::lifo_fifo_order: pass -> ignore (J0)
  • spawn::tests::lifo_order: pass -> ignore (J0)
  • spawn::tests::mixed_fifo_lifo_order: pass -> ignore (J0)
  • spawn::tests::mixed_lifo_fifo_order: pass -> ignore (J0)
  • thread_pool::tests::mutual_install: pass -> ignore (J0)
  • thread_pool::tests::mutual_install_sleepy: pass -> ignore (J0)
  • thread_pool::tests::nested_fifo_scopes: pass -> ignore (J0)
  • thread_pool::tests::nested_scopes: pass -> ignore (J0)
  • thread_pool::tests::scope_fifo_order: pass -> ignore (J0)
  • thread_pool::tests::scope_lifo_order: pass -> ignore (J0)
  • stack_overflow_crash: pass -> ignore (J1)

Additionally, 22 doctest diffs were found. These are ignored, as they are noisy.

Job group index

Test dashboard

Run

cargo run --manifest-path src/ci/citool/Cargo.toml -- \
    test-dashboard 25cf7d13c960a3ac47d1424ca354077efb6946ff --output-dir test-dashboard

And then open test-dashboard/index.html in your browser to see an overview of all executed tests.

Job duration changes

  1. pr-check-2: 2132.3s -> 2655.7s (24.5%)
  2. pr-check-1: 1460.1s -> 1761.1s (20.6%)
  3. dist-x86_64-apple: 9958.0s -> 8132.4s (-18.3%)
  4. i686-gnu-2: 5460.6s -> 6359.9s (16.5%)
  5. x86_64-gnu-tools: 3220.6s -> 3722.0s (15.6%)
  6. i686-gnu-1: 7222.7s -> 8292.9s (14.8%)
  7. x86_64-apple-2: 4949.3s -> 5658.1s (14.3%)
  8. x86_64-rust-for-linux: 2601.3s -> 2916.4s (12.1%)
  9. i686-gnu-nopt-1: 7460.4s -> 8204.8s (10.0%)
  10. dist-x86_64-illumos: 5576.8s -> 6120.4s (9.7%)
How to interpret the job duration changes?

Job durations can vary a lot, based on the actual runner instance
that executed the job, system noise, invalidated caches, etc. The table above is provided
mostly for t-infra members, for simpler debugging of potential CI slow-downs.

@rust-timer

Finished benchmarking commit (25cf7d1): comparison URL.

Overall result: ❌ regressions - no action needed

@rustbot label: -perf-regression

Instruction count

Our most reliable metric. Used to determine the overall result above. However, even this metric can be noisy.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 0.3% | [0.3%, 0.3%] | 1 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | - | - | 0 |
| All ❌✅ (primary) | - | - | 0 |

Max RSS (memory usage)

Results (secondary 0.3%)

A less reliable metric. May be of interest, but not used to determine the overall result above.

| | mean | range | count |
|---|---|---|---|
| Regressions ❌ (primary) | - | - | 0 |
| Regressions ❌ (secondary) | 2.6% | [2.5%, 2.8%] | 2 |
| Improvements ✅ (primary) | - | - | 0 |
| Improvements ✅ (secondary) | -2.1% | [-2.2%, -2.0%] | 2 |
| All ❌✅ (primary) | - | - | 0 |

Cycles

This benchmark run did not return any relevant results for this metric.

Binary size

This benchmark run did not return any relevant results for this metric.

Bootstrap: 466.974s -> 464.368s (-0.56%)
Artifact size: 372.15 MiB -> 372.14 MiB (-0.00%)

Labels
merged-by-bors This PR was explicitly merged by bors. S-waiting-on-bors Status: Waiting on bors to run and complete tests. Bors will change the label on completion. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.
9 participants