Problem
The provider catalog currently hard-codes known vendors (Anthropic, OpenAI, DashScope, Claude CLI, ACP, Codex). Users running self-hosted or niche OpenAI-compatible gateways (vLLM, LM Studio, LiteLLM, Ollama proxy, OpenRouter, Together, Groq, Fireworks, local inference) cannot register them without either (a) hijacking the openai type and inheriting mismatched defaults, or (b) shipping a new provider_type per vendor. Neither approach scales.
Goal
Introduce a neutral custom provider_type that treats any endpoint as OpenAI chat-completions compatible, so ops teams can onboard new backends via config/UI without code changes.
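At the data level the change is a single new enum value plus one validation rule: custom has no vendor defaults, so base_url becomes mandatory. A minimal sketch, assuming hypothetical identifiers (Provider, ProviderType, Validate) rather than the store's actual names:

```go
package provider

import (
	"errors"
	"strings"
)

// ProviderType enumerates supported backends; names here are illustrative.
type ProviderType string

const (
	ProviderTypeOpenAI ProviderType = "openai"
	ProviderTypeCustom ProviderType = "custom" // new neutral type
	// ...existing vendor types elided
)

type Provider struct {
	Name    string
	Type    ProviderType
	BaseURL string // for "custom": the OpenAI-compatible endpoint, required
	APIKey  string // encrypted at rest
}

// Validate captures the one rule "custom" adds: there is no vendor
// default to fall back on, so base_url must be set.
func (p Provider) Validate() error {
	if p.Type == ProviderTypeCustom && strings.TrimSpace(p.BaseURL) == "" {
		return errors.New("custom provider requires base_url")
	}
	return nil
}
```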
Scope
In scope
- Backend: accept provider_type = "custom" in provider store + HTTP validation.
- Routing: resolve custom through the existing OpenAI-compatible adapter (HTTP + SSE); see the routing sketch after this list.
- Model listing: expose /v1/models passthrough for custom endpoints; see the passthrough sketch after this list.
- UI: add "Custom Provider" entry in provider dropdown with i18n (en/vi/zh).
- Tests: unit + integration contract coverage.
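The routing change is intentionally small: custom resolves to the same chat-completions client (HTTP + SSE) that the openai type already uses, with the stored base_url substituted. A sketch reusing the hypothetical types from the earlier sketch; the adapter constructor is passed in to keep the example self-contained:

```go
package provider

import "fmt"

// Adapter is the minimal surface the router needs; the real interface
// is broader and hypothetical here.
type Adapter interface {
	ChatCompletion() error // request/stream parameters elided
}

// resolveAdapter routes "custom" down the same path as "openai":
// no new wire format, only a different endpoint and key.
func resolveAdapter(p Provider, newOpenAICompat func(baseURL, apiKey string) Adapter) (Adapter, error) {
	switch p.Type {
	case ProviderTypeOpenAI, ProviderTypeCustom:
		return newOpenAICompat(p.BaseURL, p.APIKey), nil
	default:
		return nil, fmt.Errorf("no adapter for provider type %q", p.Type)
	}
}
```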
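Model listing is a plain passthrough: forward GET {base_url}/v1/models with the provider's key and return the backend's response. A sketch with key decryption and error translation elided:

```go
package provider

import (
	"context"
	"net/http"
	"strings"
)

// listModels proxies the OpenAI-style model list from a custom backend.
func listModels(ctx context.Context, p Provider) (*http.Response, error) {
	endpoint := strings.TrimRight(p.BaseURL, "/") + "/v1/models"
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+p.APIKey)
	return http.DefaultClient.Do(req)
}
```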
Out of scope
- Non-OpenAI wire formats (Cohere, Bedrock native, Vertex native).
- Per-custom-provider feature flags (vision, tool-use capability negotiation) — deferred.
- UI credential templates per vendor.
Acceptance Criteria
- Creating a provider with provider_type=custom, a base_url, and an encrypted api_key persists the record, which then appears in the list endpoint (see the contract-test sketch after this list).
- A chat-completion request routed to a custom provider returns a streamed SSE response identical in shape to that of the openai type.
- The model-listing endpoint returns the models reported by the custom backend.
- The UI dropdown shows "Custom Provider" in en/vi/zh; selecting it saves provider_type=custom.
- go test ./internal/http/... ./internal/store/... and go test -tags integration ./tests/contracts/http_api/... pass.
- cd ui/web && pnpm test passes.
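As one shape for the create-then-list contract check, the sketch below assumes a /api/providers route and these payload field names; the real routes and fixtures live under tests/contracts/http_api/:

```go
//go:build integration

package http_api_test

import (
	"bytes"
	"net/http"
	"testing"
)

var testServerURL = "http://127.0.0.1:8080" // assumed address of the API under test

func TestCreateCustomProvider(t *testing.T) {
	// Payload fields mirror the acceptance criteria; names are assumptions.
	body := []byte(`{"provider_type":"custom","base_url":"http://localhost:8000/v1","api_key":"sk-test"}`)
	resp, err := http.Post(testServerURL+"/api/providers", "application/json", bytes.NewReader(body))
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		t.Fatalf("expected 201 Created, got %d", resp.StatusCode)
	}
	// A follow-up GET on the list endpoint would assert the provider
	// appears with provider_type "custom"; elided here.
}
```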