
fix(llm): strip markdown fences from LLM response before instructor parsing (#159)#162

Open
relunctance wants to merge 1 commit into FlowElement-ai:main from relunctance:fix/llm-markdown-json-strip

Conversation

@relunctance

Fix: LLM JSON parsing fails when response has markdown fences (#159)

Problem: Entity Name Extraction fails because qwen3 (and other reasoning models) wrap JSON responses in `` ```json ... ``` `` fences. instructor's pydantic validation rejects the raw markdown-wrapped content, causing infinite retry loops with exponential backoff.

Error:

Root cause: instructor v1.x parses the raw message.content string directly without stripping markdown fences.
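For illustration, a hypothetical raw reply of this shape is what reaches instructor unmodified; the inner payload is valid JSON, but the surrounding fence makes parsing (and therefore pydantic validation) fail. The field and entity names below are made up:

```python
import json

# Hypothetical message.content as returned by qwen3; the field name and
# entity names are illustrative only.
raw = '```json\n{"entity_names": ["Acme Corp", "Globex"]}\n```'

try:
    json.loads(raw)
except json.JSONDecodeError as exc:
    # The fence, not the JSON payload, is what breaks parsing.
    print(f"parse failed: {exc}")
```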

Solution: Wrap the instructor async client in `_MarkdownStrippingClient`, a thin proxy that:

  1. Calls the original `instructor.aclient.chat.completions.create()`
  2. Strips `` ```json ... ``` `` fences from `response.choices[0].message.content`
  3. Strips invisible control characters (0x00-0x1f) that can also break JSON parsing
  4. Returns the cleaned response to instructor's `response_model` parser

Applies to all OpenAI-compatible backends (qwen3, gpt-4, etc.) via the unified `OpenAIAdapter`.
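A rough sketch of the cleaning logic, assuming the proxy sits between the raw `chat.completions.create()` call and instructor's `response_model` parsing; the class, attribute, and regex details here are illustrative and may differ from the actual change in this PR:

```python
import re
from typing import Any

# A surrounding ```json ... ``` fence at the start/end of the content, and raw
# control characters 0x00-0x1f (which are invalid inside JSON strings anyway).
_FENCE_RE = re.compile(r"^\s*```(?:json)?\s*|\s*```\s*$")
_CTRL_RE = re.compile(r"[\x00-\x1f]")


class _MarkdownStrippingClient:
    """Thin async proxy that cleans message content before instructor parses it."""

    def __init__(self, inner: Any) -> None:
        # `inner` is assumed to expose the original async create() call.
        self._inner = inner

    async def create(self, **kwargs: Any) -> Any:
        # 1. Forward the call to the original create().
        response = await self._inner.create(**kwargs)
        content = response.choices[0].message.content
        if content:
            # 2. Drop a surrounding ```json ... ``` fence.
            content = _FENCE_RE.sub("", content)
            # 3. Drop raw control characters (0x00-0x1f).
            content = _CTRL_RE.sub("", content)
            response.choices[0].message.content = content
        # 4. Hand the cleaned response back for response_model parsing.
        return response
```

The fence pattern only matches at the very beginning and very end of the content, so replies that come back as plain JSON pass through unchanged.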

Closes #159



Development

Successfully merging this pull request may close these issues.

P0: LLM JSON parsing fails: markdown code blocks cause infinite instructor pydantic validation retries
