Conversation

@juliannagele
Member

No description provided.

fhahn and others added 6 commits November 28, 2025 15:45
Add tests for unrolling loops with reductions. In some cases, multiple
parallel reduction phis could be retained to improve performance.

(cherry picked from commit 90f733c)
Add additional tests from llvm#149470.

(cherry picked from commit d10dc67)
…149470)

When partially or runtime unrolling loops with reductions, the
reductions are currently performed in order in the loop, negating most
of the benefit of unrolling such loops.

This patch extends the unrolling code-gen to keep a parallel reduction
phi per unrolled iteration and to combine the final result after the
loop. On out-of-order CPUs, this allows multiple reduction chains to
execute in parallel.

For now, the initial transformation is restricted to cases where we
unroll a small number of iterations (hard-coded to 4, though this should
probably be capped via TTI depending on the available execution units),
to avoid introducing an excessive number of parallel phis.

It also requires single block loops for now, where the unrolled
iterations are known to not exit the loop (either due to runtime
unrolling or partial unrolling). This ensures that the unrolled loop
will have a single basic block, with a single exit block where we can
place the final reduction value computation.

The initial implementation also only supports parallelizing loops with a
single reduction and only integer reductions. Those restrictions are
just to keep the initial implementation simpler, and can easily be
lifted as follow-ups.

With corresponding TTI changes to the AArch64 unrolling preferences,
which I will also share soon, this triggers in ~300 loops across a wide
range of workloads, including LLVM itself, ffmpeg, av1aom, sqlite,
blender, brotli, zstd and more.

PR: llvm#149470
(cherry picked from commit 2d9e452)
…llvm#166353)

In combination with llvm#149470,
this will introduce parallel accumulators when unrolling reductions with
vector instructions. See also
llvm#166630, which aims to
introduce parallel accumulators for FP reductions.

(cherry picked from commit c73de97)
…ions. (llvm#166630)

This builds on top of
llvm#149470, also introducing
parallel accumulator PHIs for floating-point reductions, provided the
reassoc flag is present. See also
llvm#166353, which aims to
introduce parallel accumulators for reductions with vector instructions.

(cherry picked from commit b641509)
@juliannagele juliannagele requested a review from fhahn November 28, 2025 16:02
@juliannagele juliannagele requested a review from a team as a code owner November 28, 2025 16:02
@juliannagele
Member Author

@swift-ci please test

@juliannagele
Member Author

@swift-ci please test llvm
