
Conversation

@madisoncarter1234
Contributor

Addresses #282

Summary

  • Parallelizes ECDSA sender recovery using rayon (see the sketch after this description)
  • Adds sender_recovery_duration metric
  • All tests pass

What's included

✅ Parallel sender recovery (matches upstream reth approach)
✅ Timing metric for observability
✅ Proper error handling preserved

What's not included

⚠️ Benchmarks (infra was removed from main - can be added in follow-up)
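
A minimal sketch of the parallel recovery step, assuming a rayon dependency; the `RecoverSigner` trait below is an illustrative stand-in for the alloy-style `recover_signer()` on `OpTxEnvelope`, not the PR's actual code:

```rust
use rayon::prelude::*;

/// Stand-in for the real transaction type; the PR operates on
/// `OpTxEnvelope` with an alloy-style `recover_signer()`.
trait RecoverSigner {
    type Address: Send;
    type Error: Send;
    fn recover_signer(&self) -> Result<Self::Address, Self::Error>;
}

/// Recover every sender up front, in parallel, preserving transaction
/// order and propagating any recovery error.
fn recover_all<T>(txs: &[T]) -> Result<Vec<T::Address>, T::Error>
where
    T: RecoverSigner + Sync,
{
    txs.par_iter().map(|tx| tx.recover_signer()).collect()
}
```

Collecting into `Result<Vec<_>, _>` lets rayon short-circuit on the first failed recovery, so the original error-handling semantics are preserved.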

@cb-heimdall
Collaborator

cb-heimdall commented Dec 31, 2025

✅ Heimdall Review Status

| Requirement | Status |
|-------------|--------|
| Reviews     | 1/1    |

Denominator calculation:
- 1 if user is bot: 0
- 1 if user is external: 0
- 2 if repo is sensitive: 0
- From .codeflow.yml: 1

Additional review requirements:
- Max: 0 (From CODEOWNERS: 0, Global minimum: 0)
- Max: 1 (1 if commit is unverified: 1)
- Sum: 2

@madisoncarter1234 force-pushed the fix/parallel-sender-recovery branch from a2763d7 to e839384 on January 1, 2026 at 16:57
  Addresses base#282 - reduces sender recovery overhead during flashblock processing.

  ## Changes
  - Add rayon for parallel iteration over transactions
  - Batch ECDSA sender recovery upfront using par_iter()
  - Add sender_recovery_duration metric for observability
  - Preserve cache lookup behavior (check prev_pending_blocks first)
  - Maintain original error handling semantics

  ## Technical Approach
  Follows upstream reth pattern (paradigmxyz/reth#20169) - recover all
  senders in parallel before the sequential transaction processing loop.

  ## Testing
  All 10 existing flashblocks tests pass.

  ## Note
  Benchmarks not included as benchmark infrastructure was removed from main.
  The new sender_recovery_duration metric can be used for production measurement.
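
As a rough illustration of the metric, a sketch assuming the `metrics` crate's 0.22+ facade and reusing the illustrative `RecoverSigner`/`recover_all` names from the sketch above (not the PR's actual code):

```rust
use std::time::Instant;

/// Sketch: time the parallel recovery phase and record it under the
/// `sender_recovery_duration` histogram via the `metrics` facade.
fn recover_with_metric<T>(txs: &[T]) -> Result<Vec<T::Address>, T::Error>
where
    T: RecoverSigner + Sync,
{
    let start = Instant::now();
    let senders = recover_all(txs)?;
    metrics::histogram!("sender_recovery_duration").record(start.elapsed().as_secs_f64());
    Ok(senders)
}
```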
@madisoncarter1234 force-pushed the fix/parallel-sender-recovery branch from e839384 to 41f678f on January 2, 2026 at 23:56
Comment on lines 420 to 423
// Sender already recovered in parallel phase
let sender = *sender_by_hash
.get(&tx_hash)
.expect("sender must exist from parallel recovery");
Contributor

To avoid the .expect statement here, we could just iterate over the txs in parallel as you've done earlier, but instead of constructing a map, produce a list of (Transaction, Sender) tuples so the sender is available directly in this for loop.

Contributor

@refcell refcell left a comment

This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?

@madisoncarter1234
Contributor Author

This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?

Switched from the HashMap lookup to Vec<(&OpTxEnvelope, Address)> so the sender is directly available in the loop; no .expect() needed.
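
A minimal sketch of that shape, reusing the illustrative `RecoverSigner` trait from the earlier sketch (the real code pairs `&OpTxEnvelope` with `Address`):

```rust
use rayon::prelude::*;

/// Pair each transaction with its recovered sender so the sequential
/// processing loop can consume `(tx, sender)` directly, with no map
/// lookup and no `.expect()`.
fn pair_with_senders<T>(txs: &[T]) -> Result<Vec<(&T, T::Address)>, T::Error>
where
    T: RecoverSigner + Sync,
{
    txs.par_iter()
        .map(|tx| tx.recover_signer().map(|sender| (tx, sender)))
        .collect()
}
```

The downstream loop then becomes `for (tx, sender) in &pairs { ... }`.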

Re: benchmarking, I can take a stab at it in this PR!

@refcell
Contributor

refcell commented Jan 6, 2026

This looks great, but it looks like Haardik wanted to first get data and test performance before implementing parallel sender recovery in #282. Maybe we could do that in this PR or a follow-on?

Switched from the HashMap lookup to Vec<(&OpTxEnvelope, Address)> so the sender is directly available in the loop; no .expect() needed.

Re: benchmarking, I can take a stab at it in this PR!

Awesome work, thank you @madisoncarter1234!!! And yes, if you could add benchmarks here, that would be very helpful.

@madisoncarter1234
Contributor Author

Awesome work, thank you @madisoncarter1234!!! And yes, if you could add benchmarks here, that would be very helpful.

Added a sender recovery benchmark (crates/flashblocks/benches/sender_recovery.rs). Results on my machine are below:

| Transactions | Sequential | Parallel | Speedup |
|--------------|------------|----------|---------|
| 30 txs       | 649 µs     | 185 µs   | 3.5x    |
| 40 txs       | 919 µs     | 241 µs   | 3.8x    |
| 100 txs      | 2.35 ms    | 437 µs   | 5.4x    |

Parallel recovery scales well, with larger batches showing larger gains.
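
For context, a minimal sketch of what such a criterion harness can look like; `recover_stub` below is a stand-in CPU-bound function so the sketch compiles on its own, not the actual ECDSA recovery from the PR's bench:

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use rayon::prelude::*;

// Stand-in for per-transaction ECDSA recovery work; the real bench
// recovers senders from signed transactions.
fn recover_stub(tx: &u64) -> u64 {
    (0..10_000u64).fold(*tx, |acc, i| acc.wrapping_mul(31).wrapping_add(i))
}

fn bench_sender_recovery(c: &mut Criterion) {
    let mut group = c.benchmark_group("sender_recovery");
    for n in [30u64, 40, 100] {
        let txs: Vec<u64> = (0..n).collect();
        // Sequential baseline: recover senders one transaction at a time.
        group.bench_with_input(BenchmarkId::new("sequential", n), &txs, |b, txs| {
            b.iter(|| txs.iter().map(recover_stub).collect::<Vec<_>>())
        });
        // Parallel variant: recover senders across the rayon thread pool.
        group.bench_with_input(BenchmarkId::new("parallel", n), &txs, |b, txs| {
            b.iter(|| txs.par_iter().map(recover_stub).collect::<Vec<_>>())
        });
    }
    group.finish();
}

criterion_group!(benches, bench_sender_recovery);
criterion_main!(benches);
```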

@haardikk21
Collaborator

haardikk21 commented Jan 7, 2026

Awesome work, thank you @madisoncarter1234!!! And yes, if you could add benchmarks here, that would be very helpful.

Added a sender recovery benchmark (crates/flashblocks/benches/sender_recovery.rs). Results on my machine are below:

| Transactions | Sequential | Parallel | Speedup |
|--------------|------------|----------|---------|
| 30 txs       | 649 µs     | 185 µs   | 3.5x    |
| 40 txs       | 919 µs     | 241 µs   | 3.8x    |
| 100 txs      | 2.35 ms    | 437 µs   | 5.4x    |

Parallel recovery scales well, with larger batches showing larger gains.

Thank you, this is great. I added one more bench based on your work that also tests against a set of real transactions across 5 blocks from Base mainnet to introduce some variety. LGTM

Note for @refcell: the large diff in the PR is now due to the base_mainnet_blocks.json file being almost 100k lines long, but that's just 5 blocks' worth of raw transaction data from mainnet. The new bench I added is in the same sender_recovery.rs file, which is the only code addition.

@cb-heimdall
Collaborator

Review Error for haardikk21 @ 2026-01-07 06:41:57 UTC
User cannot review their own commit

@haardikk21 requested a review from refcell on January 7, 2026 at 06:42
@refcell merged commit faf08d8 into base:main on Jan 7, 2026
10 checks passed