Fix: use chat.message hook for context injection #3
Open
niklas-r wants to merge 1 commit into mark-hingston:main
Conversation
chat.params receives (input, output) where output only exposes
{temperature, topP, options} — no system prompt or message text.
The plugin was treating the first arg as a wrapper object and
accessing params.input.message.text, which was always undefined,
causing context injection to silently bail on every message.
Switch to chat.message hook which provides output.message.system
(the system prompt) and output.parts (user message TextParts),
matching the OpenCode plugin API contract.
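The commit message above implies a hook of roughly the following shape. This is a minimal sketch: the local interfaces stand in for the real `@opencode-ai/plugin` types (only `output.message.system` and `output.parts` are confirmed by this PR), and the injected memory string is a placeholder.

```typescript
// Local stand-ins for the plugin API shapes described in this PR.
// The real types live in @opencode-ai/plugin; these names are assumptions.
interface TextPart { type: "text"; text: string }
interface Part { type: string; text?: string }
interface ChatMessageOutput {
  message: { system?: string }; // mutable system prompt
  parts: Part[];                // user message parts
}

// chat.message hook: called with (input, output) as two separate
// arguments — not a single wrapper object, which was the original bug.
const hooks = {
  "chat.message": async (_input: unknown, output: ChatMessageOutput) => {
    // Collect the user's text from TextParts only.
    const userText = output.parts
      .filter((p): p is TextPart => p.type === "text" && typeof p.text === "string")
      .map((p) => p.text)
      .join("\n");
    if (!userText) return; // nothing to match against

    // Prepend injected context to the mutable system prompt.
    // Placeholder content; ELF would assemble rules/heuristics here.
    const memory = "[ELF MEMORY]\n(golden rules / matched heuristics)";
    output.message.system = `${memory}\n\n${output.message.system ?? ""}`;
  },
};
```

Note that mutating `output.message.system` in place is what makes the injection visible to the agent; returning a new object would be ignored under this contract.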
Problem
ELF's context injection (golden rules, heuristics, and past learnings injected into the system prompt) does not work against the current OpenCode (v1.2.6) plugin API.
Two issues with the current `chat.params` usage:

- The hook is called with `(input, output)` — two separate arguments. The plugin uses `async (params)` and accesses `params.input.message.text`. Since `params` is the `input` object, `params.input` resolves to `undefined`, hitting the `if (!userMessage) return` bail-out on every call.
- `chat.params` only exposes `{ temperature, topP, options }` in its `output` — there is no `systemPrompt` or message text to modify. Per the current `@opencode-ai/plugin` type definitions, this hook is for tweaking LLM parameters, not injecting content.

Symptoms:

- `elf metrics` returns `[]` — no injection metrics recorded (`hitCount: 0`)
- no `[ELF MEMORY]` block visible in the agent's system prompt

Fix
Switch from `chat.params` to `chat.message`, which provides:

- `output.message.system` — the mutable system prompt (`UserMessage.system`)
- `output.parts` — the user's message as `Part[]` (filtered for `TextPart`)

Changes

- `src/index.ts` — Replace `chat.params` with `chat.message`, fix the signature to `(input, output)`, read user text from `output.parts`, inject into `output.message.system`
- `package.json` — Update the declared hook from `chat.params` to `chat.message`
- `README.md` — Update the architecture diagram to reflect `chat.message`

Verified
After this fix, the `[ELF MEMORY]` block is correctly injected into the system prompt on every message, containing golden rules and any matched heuristics:
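The injected block itself is not reproduced in this excerpt. Purely for illustration — the real contents come from ELF's stored rules and whichever heuristics match the message — the prefix might resemble:

```text
[ELF MEMORY]
Golden rules:
- ...
Matched heuristics:
- ...
```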