feat: add forceReleaseLock + onLockConflict for interrupt/steerability #193
Merged
haydenbleasel merged 17 commits into vercel:main · Mar 8, 2026
Conversation
Amp-Thread-ID: https://ampcode.com/threads/T-019cc675-20e8-73db-b852-5690bafe0008 Co-authored-by: Amp <amp@ampcode.com>
Contributor
@gakonst is attempting to deploy a commit to the Vercel Labs Team on Vercel. A member of the Team first needs to authorize it.
Implements the new StateAdapter.forceReleaseLock method in the mock adapter so tests using createMockState() don't break. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Allow the onLockConflict callback to return a Promise, enabling users to check external state (e.g. DB queries) before deciding whether to force-release or drop. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Covers: default drop behavior, force mode, sync/async callbacks returning force/drop, and forceReleaseLock on memory adapter. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…dapter Biome enforces alphabetical ordering on interface members. Also adds the missing forceReleaseLock implementation to the ioredis state adapter. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
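The commit above notes that onLockConflict may return a Promise so the callback can consult external state before deciding. A minimal sketch of such an async callback, assuming the callback shape (threadId, message) => 'force' | 'drop' described in this PR; isTurnStale is a hypothetical stand-in for a real DB query, not part of the SDK:

```typescript
type LockConflictMode = 'drop' | 'force';

// Hypothetical external check, e.g. a DB query asking whether the
// current lock holder's turn is stale and safe to interrupt.
async function isTurnStale(threadId: string): Promise<boolean> {
  return true; // stand-in for a real lookup
}

// Async onLockConflict callback: because it may return a Promise,
// we can await I/O before choosing 'force' or 'drop'.
const onLockConflict = async (
  threadId: string,
  _message: unknown,
): Promise<LockConflictMode> => {
  return (await isTurnStale(threadId)) ? 'force' : 'drop';
};
```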
Member
great stuff @gakonst 🙏
Problem
When building AI agents on top of the chat SDK, users need to interrupt a running agent by sending a follow-up message. Currently, if a handler is still processing (e.g., streaming an LLM response), the thread lock blocks the new message and it is silently dropped via LockError. This makes it impossible to steer or interrupt long-running agent turns from chat.
Solution
Two changes:
- StateAdapter.forceReleaseLock(threadId): unconditionally releases a thread lock regardless of ownership token. The previous lock holder's finally block will call releaseLock() with a stale token, which is already a no-op (both the Redis and memory implementations verify the token before deleting).
- onLockConflict config option: controls behavior when acquireLock fails:
  - 'drop' (default, preserves current behavior): throw LockError
  - 'force': force-release the existing lock and re-acquire
  - (threadId, message) => 'force' | 'drop': callback for custom logic
Why this matters
Any chat SDK user building long-running handlers (AI agents, workflows, async tasks) needs this. The current lock-or-drop behavior was designed for short idempotent handlers, not multi-minute streaming sessions. This is a non-breaking, backwards-compatible addition.
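The semantics described above can be sketched as a self-contained in-memory example. This is illustrative only, assuming the token-verified releaseLock and unconditional forceReleaseLock behavior described in this PR; MemoryLocks and acquireWithPolicy are hypothetical names, not the SDK's actual classes:

```typescript
class MemoryLocks {
  private locks = new Map<string, string>(); // threadId -> ownership token

  acquireLock(threadId: string, token: string): boolean {
    if (this.locks.has(threadId)) return false; // conflict: already locked
    this.locks.set(threadId, token);
    return true;
  }

  // Only the current token holder may release; stale tokens are a no-op,
  // so a previous holder's finally block cannot clobber a stolen lock.
  releaseLock(threadId: string, token: string): void {
    if (this.locks.get(threadId) === token) this.locks.delete(threadId);
  }

  // Unconditional release, regardless of ownership token.
  forceReleaseLock(threadId: string): void {
    this.locks.delete(threadId);
  }
}

type LockConflictMode = 'drop' | 'force';
type OnLockConflict =
  | LockConflictMode
  | ((threadId: string, message: unknown) => LockConflictMode | Promise<LockConflictMode>);

// Conflict resolution mirroring the onLockConflict option: drop (throw),
// force (steal the lock), or defer to a possibly-async callback.
async function acquireWithPolicy(
  locks: MemoryLocks,
  threadId: string,
  token: string,
  message: unknown,
  onLockConflict: OnLockConflict = 'drop',
): Promise<boolean> {
  if (locks.acquireLock(threadId, token)) return true;
  const mode =
    typeof onLockConflict === 'function'
      ? await onLockConflict(threadId, message)
      : onLockConflict;
  if (mode === 'drop') throw new Error('LockError: thread is locked');
  locks.forceReleaseLock(threadId); // 'force': release the stuck lock
  return locks.acquireLock(threadId, token);
}
```

Because releaseLock verifies the token before deleting, force-releasing and re-acquiring is safe even while the interrupted handler's cleanup is still pending.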