Stop → Continue: resume truncated or aborted assistant turns (#49)
Open

vahid-ahmadi wants to merge 1 commit into main from
Conversation
Backend (`chatbot.py`):

- Capture `final.stop_reason` from each Anthropic stream and propagate it on the `done` SSE event. The frontend uses `"max_tokens"` to detect truncation; `"end_turn"` / `"stop_sequence"` mean the model finished cleanly.

Frontend (`ChatPage.tsx`):

- Extend `Message` with `stop_reason` and `stopped` flags. The `done` handler stores `stop_reason`; the `AbortError` catch (user clicks Stop) sets `stopped: true`. Both flags survive saveConversation/loadConversation via the untyped messages JSON column on the backend.
- Render a Continue affordance below any message where `stop_reason === "max_tokens" || stopped`, hidden if a tool is still pending in the message (no orphan tool calls).
- New `continueMessage(idx)` posts the conversation up to and including the partial assistant turn back to `/chat/message`. Anthropic's assistant-prefill behaviour means the model continues the same logical turn — no "Continue from where you stopped" nudge needed. Streamed content appends into the SAME message bubble; cost is summed onto the existing `cost_gbp`. If continue itself truncates or is stopped, the affordance comes back (user-driven, not auto-loop).

Acceptance criteria:

- `max_tokens` → Continue button appears.
- User clicks Stop mid-stream → partial preserved + Continue appears.
- Continue resumes in-place, single bubble, summed cost.
- Out of scope: indefinitely auto-continuing — kept user-triggered.

Closes #44

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
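The backend half of this can be sketched as a pure formatting helper. This is illustrative, not the actual `chatbot.py` code: `FinalMessage` is a stub standing in for the object the Anthropic SDK returns from `stream.get_final_message()`, and `done_event` is an assumed helper name.

```python
import json
from dataclasses import dataclass


@dataclass
class FinalMessage:
    # Stub for the SDK's final message; real values include
    # "end_turn", "max_tokens", and "stop_sequence".
    stop_reason: str


def done_event(final: FinalMessage) -> str:
    """Format the terminal SSE frame, propagating stop_reason so the
    frontend can distinguish truncation ("max_tokens") from a clean finish."""
    payload = {"type": "done", "stop_reason": final.stop_reason}
    return f"event: done\ndata: {json.dumps(payload)}\n\n"
```

Keeping the truncation signal on the `done` frame (rather than a separate event) means the frontend learns the outcome in the same place it already finalises the message bubble.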
Summary
A long policy analysis that hits the 16k `max_tokens` cap, or one the user kills with Stop, currently dies there — they have to start a new chat and re-explain context. This PR adds a Continue affordance below those messages that resumes from exactly where the answer stopped.
Closes #44.
Behaviour

- A response cut off at `max_tokens`, or stopped mid-stream by the user, keeps its partial content and shows a Continue button below the message.
- Continue resumes the same logical turn in place: streamed text appends into the same bubble and cost is summed onto the existing `cost_gbp`.
- The button is hidden while a tool call in the message is still pending, and reappears if the continuation itself truncates or is stopped.
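The resume-in-place behaviour can be modelled as a pure merge over the message record. Field names (`content`, `cost_gbp`, `stop_reason`, `stopped`) follow the PR; the `merge_continuation` helper itself is a sketch, not the shipped frontend code.

```python
def merge_continuation(message: dict, new_text: str, new_cost: float) -> dict:
    """Append continuation text into the SAME bubble and sum its cost onto
    the existing cost_gbp; clear the flags so the Continue affordance
    disappears once the resumed stream finishes cleanly."""
    return {
        **message,
        "content": message["content"] + new_text,
        "cost_gbp": round(message.get("cost_gbp", 0.0) + new_cost, 6),
        "stop_reason": None,
        "stopped": False,
    }
```

If the continuation itself truncates, the `done` handler would set `stop_reason` back to `"max_tokens"` and the button returns, which is what keeps the loop user-driven.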
Implementation notes

- Backend (`chatbot.py`): capture `final.stop_reason` from each Anthropic stream and propagate it on the `done` SSE event; `"max_tokens"` signals truncation, `"end_turn"` / `"stop_sequence"` a clean finish.
- Frontend (`ChatPage.tsx`): `Message` gains `stop_reason` and `stopped` flags, persisted through saveConversation/loadConversation via the untyped messages JSON column.
- `continueMessage(idx)` posts the conversation up to and including the partial assistant turn back to `/chat/message`. Anthropic's assistant-prefill behaviour makes the model continue that same turn, so no extra "continue from where you stopped" prompt is needed.
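The `continueMessage` flow leans on Anthropic's assistant-prefill convention: when the last message in a request is an assistant turn, the model resumes that turn rather than starting a new one. A hedged sketch of the payload slice (the helper name and message shape are illustrative):

```python
def build_continue_payload(messages: list[dict], idx: int) -> list[dict]:
    """Slice the conversation up to AND including the partial assistant
    turn at idx. Ending the request on an assistant message triggers
    prefill, so the model continues that turn's text."""
    payload = [
        {"role": m["role"], "content": m["content"]}
        for m in messages[: idx + 1]
    ]
    assert payload[-1]["role"] == "assistant", "must end on the partial turn"
    return payload
```

Note the off-by-one hazard this guards against: slicing to `idx` instead of `idx + 1` would drop the partial turn and make the model answer from scratch.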
Test plan

- Force a `max_tokens` truncation: the Continue button appears.
- Click Stop mid-stream: the partial message is preserved and Continue appears.
- Click Continue: the answer resumes in place, in a single bubble, with summed cost.
- Start a continuation while a tool call is pending: the button stays hidden (no orphan tool calls).
Out of scope

- Auto-continuing indefinitely: resumption stays user-triggered.