fix: stream thinking output immediately #145
Triage result

Quick read
Intent
This PR is trying to make text-mode CLI output show agent thinking as soon as each `agent_thought_chunk` arrives.

Why
The underlying problem is real, but this fix is too localized. It removes the delay by changing the rendered-text behavior so that each transport chunk becomes its own unit of rendered output.

Codex review
Not run in this read-only triage step. Local diff inspection indicates the fix changes the output contract in the wrong place.
CI/CD
Not run in this step. This was a read-only triage pass; no installs, tests, CI checks, or review automation were started here.

Recommendation
Close this PR. A right-shaped fix should make thinking output visible promptly without making the text formatter's public output depend on how the upstream agent happens to split the thought stream into chunks.
Closed by automated triage.
Summary
This change makes text-mode thinking output render as soon as each `agent_thought_chunk` arrives, instead of buffering until the next non-thought update.

Problem
The text formatter currently accumulates thought chunks in `thoughtBuffer` and only flushes them when a later non-thought update or prompt result arrives. In practice this means thinking output appears blocked, even when the agent is streaming it incrementally.

Changes
- Render `agent_thought_chunk` immediately in text mode

Why this should be separate
This PR only changes the output timing. It does not change thought formatting or whitespace handling.
Testing
```
./node_modules/.bin/tsc -p tsconfig.test.json
node --test dist-test/test/output.test.js
```