
Conversation

@MasterPtato
Contributor

No description provided.

@vercel

vercel bot commented Nov 21, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project              Deployment  Preview  Comments  Updated (UTC)
rivetkit-serverless  Ready       Preview  Comment   Nov 22, 2025 2:46am

3 Skipped Deployments

Project              Deployment  Preview  Comments  Updated (UTC)
rivet-cloud          Ignored                        Nov 22, 2025 2:46am
rivet-inspector      Ignored     Preview            Nov 22, 2025 2:46am
rivet-site           Ignored     Preview            Nov 22, 2025 2:46am

Contributor Author

MasterPtato commented Nov 21, 2025

Warning

This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more


How to use the Graphite Merge Queue

Add the label merge-queue to this PR to add it to the merge queue.

You must have a Graphite account in order to use the merge queue. Sign up using this link.

An organization admin has enabled the Graphite Merge Queue in this repository.

Please do not merge from GitHub as this will restart CI on PRs being processed by the merge queue.

This stack of pull requests is managed by Graphite. Learn more about stacking.

@claude

claude bot commented Nov 21, 2025

PR Review: fix: parallelize publish signal awaits

Summary

This PR optimizes the publish_signal_inner function by parallelizing two independent database lookups using tokio::try_join!. The change fetches workflow_name_key and wake_signal_key concurrently instead of sequentially.

Code Quality ✅

Good changes:

  • Using tokio::try_join! is the correct approach for parallelizing independent async operations
  • The code is cleaner with the key definitions moved closer together at the top of the function
  • Both database lookups are independent and can safely run in parallel

Correctness ✅

The change is semantically equivalent to the original code:

  1. Both keys are created from inputs available at function start (workflow_id, signal_name)
  2. Neither lookup depends on the result of the other
  3. The wake_signal_entry is still checked in the same conditional logic, just fetched earlier

Performance ✅

This is a valid performance optimization:

  • Before: 2 sequential database round-trips
  • After: 1 parallel database round-trip (both queries execute concurrently)
  • This cuts latency from roughly the sum of the two query times to roughly the slower of the two

Security ✅

No security concerns - the change only affects the ordering/parallelization of existing read operations within the same transaction.

Potential Consideration ⚠️

One minor note: if the workflow doesn't exist (checked at lines 155-157), we now also fetch the wake_signal_entry unnecessarily. However, this is a negligible cost since:

  1. Both queries run in parallel anyway
  2. The "workflow not found" case is likely an error/edge case rather than the happy path
  3. The slight overhead is worth the latency improvement on the common path

Test Coverage

Since this is a performance optimization that doesn't change behavior, existing tests should provide adequate coverage. The function's contract remains unchanged.

Verdict: LGTM

Clean, correct performance optimization. Approve.

@pkg-pr-new

pkg-pr-new bot commented Nov 21, 2025

More templates

@rivetkit/cloudflare-workers

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/cloudflare-workers@3523

@rivetkit/db

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/db@3523

@rivetkit/framework-base

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/framework-base@3523

@rivetkit/next-js

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/next-js@3523

@rivetkit/react

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/react@3523

rivetkit

pnpm add https://pkg.pr.new/rivet-dev/rivet/rivetkit@3523

@rivetkit/sql-loader

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/sql-loader@3523

@rivetkit/engine-runner

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/engine-runner@3523

@rivetkit/engine-runner-protocol

pnpm add https://pkg.pr.new/rivet-dev/rivet/@rivetkit/engine-runner-protocol@3523

commit: d6cccca

@MasterPtato MasterPtato changed the base branch from main to graphite-base/3523 November 22, 2025 02:45
@MasterPtato MasterPtato force-pushed the 11-18-fix_parallelize_publish_signal_awaits branch from cda5885 to d6cccca Compare November 22, 2025 02:45
@MasterPtato MasterPtato changed the base branch from graphite-base/3523 to 11-21-chore_fix_diff_between_ee_fix_plural_api_query_params November 22, 2025 02:45
@claude

claude bot commented Nov 22, 2025

Code Review: fix: parallelize publish signal awaits

Summary

This PR optimizes publish_signal_inner by parallelizing two independent database reads using tokio::try_join!. Previously, these reads were sequential.

✅ What looks good

  1. Correct parallelization pattern: Using tokio::try_join! is the right approach for running independent async operations concurrently. Both tx.get() calls are independent reads that can safely execute in parallel.

  2. No semantic changes: The logic remains identical - the workflow existence check and wake signal check are performed with the same data, just fetched more efficiently.

  3. Clean code organization: Moving the wake_signal_key declaration up to co-locate it with its fetch is a sensible refactor that improves readability.

  4. Net reduction in code: The change reduces lines (-12, +9) while improving performance, which is ideal.

🔍 Review notes

  1. Transaction isolation level: Both fetches use Serializable isolation, which is consistent with the original code. This ensures the parallel reads maintain the same consistency guarantees.

  2. Error handling: try_join! will short-circuit and return early if either operation fails, which matches the original sequential behavior.

  3. Variable naming: The shadowing of workflow_name_entry (from the tuple result to the Some binding) is fine and follows existing patterns in the codebase.

💭 Minor suggestion

Consider adding a brief comment above the try_join! explaining why these reads are parallelized:

// Fetch workflow name and wake signal status in parallel since they're independent reads
let (workflow_name_entry, wake_signal_entry) = tokio::try_join!(
    tx.get(&self.subspace.pack(&workflow_name_key), Serializable),
    tx.get(&self.subspace.pack(&wake_signal_key), Serializable),
)?;

This is optional since the intent is fairly clear from context.

✅ Verdict

LGTM - This is a clean optimization that improves performance without changing behavior. The parallel reads should reduce latency for the publish_signal_inner operation.

@claude

claude bot commented Nov 22, 2025

PR Review: fix: parallelize publish signal awaits

Summary

This PR parallelizes two sequential database reads in publish_signal_inner using tokio::try_join!, reducing latency by executing both fetches concurrently.

Code Quality ✅

  • Good use of tokio::try_join!: The change correctly uses try_join! to parallelize two independent database fetches that were previously sequential
  • Clean refactoring: The wake_signal_key declaration is appropriately moved earlier to enable the parallel fetch
  • No behavioral change: The logic remains identical - both values are still checked at the same points, just fetched earlier

Correctness ✅

The change is correct:

  1. Both workflow_name_key and wake_signal_key fetches are independent operations with no data dependencies between them
  2. The wake_signal_entry is only used later in the function (line 208) to conditionally wake the workflow
  3. Error handling via ? on try_join! properly propagates any failures from either operation

Performance ✅

This is a net positive performance improvement:

  • Before: 2 sequential awaits = sum of both latencies
  • After: 1 parallel await = max of both latencies

For database operations, this can provide meaningful latency reduction, especially under load.

Potential Considerations

  1. Transaction isolation: Both reads use Serializable isolation level within the same transaction (tx), which should maintain consistency. This looks correct as written.
  2. Early fetch of wake_signal_entry: The signal wake entry is now fetched even when the workflow doesn't exist (early return at line 156). This is a minor inefficiency in the error case, but the happy path improvement outweighs this edge case cost.

Test Coverage

No new tests added, but this is a pure refactoring that doesn't change behavior. Existing tests should cover this code path.

Verdict: Approve

This is a clean, focused optimization that correctly parallelizes independent async operations. The code is readable and maintains the same behavior while reducing latency.
