Description
Currently the parser relies on pattern matching and heuristics. Add support for multiple LLM providers as the NLP backbone so teams can choose based on cost, latency, or preference.
Providers to support:
- `anthropic` — Claude API (claude-haiku for speed, claude-sonnet for accuracy)
- `openai` — GPT-4o / GPT-4o-mini
- `google` — Gemini 1.5 Flash
- `mistral` — Mistral Small (good for cost-sensitive deployments)
- `cohere` — Command R (strong on structured extraction tasks)
Proposed interface:
```js
const parser = new IntentParser({
  provider: "anthropic", // swap to "openai" | "google" | "mistral" | "cohere"
  apiKey: process.env.API_KEY,
  model: "claude-haiku-4-5" // optional override
})
```
What this unlocks:
- Teams not on Anthropic can still use intent-parser
- Benchmark accuracy and latency across providers per corridor
- Fallback to a secondary provider if primary is down
- Cost optimization — route simple intents to cheaper models
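The fallback idea above could look roughly like this. A minimal sketch: the `Adapter` type, `parseWithFallback` helper, and result shape are hypothetical illustrations, not the final API.

```typescript
// Sketch of provider fallback: try the primary adapter, and if it
// throws (provider down, rate-limited, etc.), retry on the secondary.
// The Adapter type and parse() signature here are illustrative only.
type Adapter = {
  provider: string;
  parse(utterance: string): Promise<{ intent: string; parsed_by: string }>;
};

async function parseWithFallback(
  utterance: string,
  primary: Adapter,
  secondary: Adapter
): Promise<{ intent: string; parsed_by: string }> {
  try {
    return await primary.parse(utterance);
  } catch {
    // Primary failed; fall back to the secondary provider once.
    return secondary.parse(utterance);
  }
}
```

The same shape extends naturally to cost routing: pick the adapter list per request based on intent complexity instead of a fixed primary/secondary pair.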
Suggested approach:
- Abstract LLM calls behind a `ProviderAdapter` interface
- Each provider gets its own adapter — `AnthropicAdapter`, `OpenAIAdapter`, etc.
- Keep regex heuristics as the fast path, LLM as the enrichment layer
- Add a `parsed_by` field in the response showing which provider and model was used
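The adapter abstraction suggested above might be sketched as follows. This is a rough illustration under the issue's naming (`ProviderAdapter`, `parsed_by`); the method names, result shape, and the stubbed `AnthropicAdapter` body are hypothetical, not a real API integration.

```typescript
// Illustrative sketch only — not the final interface.
interface ParseResult {
  intent: string;
  confidence: number;
  parsed_by: { provider: string; model: string }; // which backend produced this
}

interface ProviderAdapter {
  readonly provider: string;
  readonly model: string;
  parse(utterance: string): Promise<ParseResult>;
}

// Example adapter; a real AnthropicAdapter would call the Claude API here.
class AnthropicAdapter implements ProviderAdapter {
  readonly provider = "anthropic";
  constructor(readonly model: string, private apiKey: string) {}

  async parse(utterance: string): Promise<ParseResult> {
    // Placeholder body: swap in an actual API call. The parsed_by field
    // records which provider/model handled the request, as proposed.
    return {
      intent: "unknown",
      confidence: 0,
      parsed_by: { provider: this.provider, model: this.model },
    };
  }
}
```

With this shape, the fast-path regex heuristics can run first and only delegate to the configured adapter when they fail to match.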