custom openai responses API support #1191
AlexanderLuck wants to merge 4 commits into carlrobertoh:master
Conversation
@AlexanderLuck is attempting to deploy a commit to carlrobertohgmailcom's projects Team on Vercel. A member of the Team first needs to authorize it.
Attached a second commit that fixes a bug where copying a working custom OpenAI configuration did not properly handle the existing API key.
… preview. Fixed issues with 2.5 models
The 3rd commit is a cleanup of the Google/Gemini models. 3.0 is being deprecated in two weeks, as per https://ai.google.dev/gemini-api/docs/gemini-3, and it generally had issues as well. Tested all 4 available models; they are now working in the chat with the new commit.
… (@includeopenfiles) appear immediately while file/folder/git scanning runs in parallel in the background. Also passes the actual search text to group lookups instead of an empty string, adds early termination (a 200-file cap) to prevent full project tree scans, and deduplicates merged results.
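The capped scan with deduplication described in that commit can be sketched as follows. This is an illustrative Python sketch with hypothetical names (the actual plugin code is Kotlin); the 200-file cap matches the number quoted above.

```python
# Sketch of the commit's search strategy: merge results from several parallel
# scanners (open files, folder scan, git), deduplicate, and stop early once a
# cap is reached so a huge project tree is never fully walked.
# FILE_CAP and all names here are illustrative, not the plugin's actual code.
FILE_CAP = 200

def merge_search_results(*sources, cap=FILE_CAP):
    """Merge results from parallel scanners, dedup, and terminate at `cap`."""
    seen = set()
    merged = []
    for source in sources:
        for path in source:
            if path in seen:
                continue  # deduplicate entries found by more than one scanner
            seen.add(path)
            merged.append(path)
            if len(merged) >= cap:  # early termination: avoid full-tree scans
                return merged
    return merged

open_files = ["src/a.kt", "src/b.kt"]
git_files = ["src/b.kt", "src/c.kt"]
print(merge_search_results(open_files, git_files))
# ['src/a.kt', 'src/b.kt', 'src/c.kt']
```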
The 4th commit improves the performance and reactivity of the search for @ commands.
Just curious, will it replace ordinary chat completions? Or is it a second protocol, so that both become available?
I'm not sure I perfectly understand what you mean, but the new API is implemented as a new path. It checks the input URL to determine which OpenAI endpoint schema is used and, for the new Responses API, uses new parallel code to parse the new API pattern.
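The URL-based dispatch described there might look roughly like this. A minimal Python sketch with hypothetical function names, not the plugin's actual Kotlin code:

```python
# Sketch: pick a request/response schema based on the configured endpoint URL,
# as the comment above describes. Names here are hypothetical.
def detect_endpoint_schema(url: str) -> str:
    """Return which OpenAI endpoint schema the configured URL implies."""
    path = url.split("?", 1)[0].rstrip("/")
    if path.endswith("/responses"):
        return "responses"         # new Responses API -> new parallel parsing code
    return "chat_completions"      # default: classic chat completions format

print(detect_endpoint_schema("https://api.openai.com/v1/responses"))
# responses
print(detect_endpoint_schema("https://api.openai.com/v1/chat/completions"))
# chat_completions
```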
So if I read you correctly, that means both become available, yes. Currently just the old OpenAI format is supported.
Yes, that is what I meant. Thanks. Hopefully it gets merged then |
Many thanks for contributing, and sorry for the late response. Unfortunately, the timing of this PR is bad, as I'm in the middle of removing llm-client entirely in favor of Koog. I'll be pushing a bunch of new changes shortly, which will conflict with yours. One question though: if the plugin's native OpenAI provider supports the Responses API, why would you need to configure it via Custom OpenAI? One reason could be faster access to newer models, but other than that I don't see a very compelling reason, since the Responses API isn't widely supported by 3rd-party inference providers.
Even though I am not the author of that commit, this question is in fact sensible. The Responses API is better for agents. And the native OpenAI provider is insufficient because, aside from being limited to older models, it is also impossible to override anything like reasoning effort; it is a complete black box without an option to choose anything. At minimum: reasoning effort, verbosity, service tier, whether to allow native OpenAI web search, etc. Having just an option in settings to enter an API key and hope it works is a no-go; that is why I run all OpenAI models through Custom OpenAI, manually cloning them into various reasoning efforts, service tiers, and other variants. Also, the Responses API is much more token-caching efficient.
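To make the knobs listed there concrete, here is a sketch of a Responses API payload a custom configuration could expose. The field names (`reasoning.effort`, `text.verbosity`, `service_tier`) follow OpenAI's published Responses API parameters, but treat the exact set and allowed values as assumptions to verify against the current docs:

```python
# Sketch of a Responses API request body exposing the tuning knobs mentioned
# in the comment above. Field names are based on OpenAI's Responses API docs;
# verify them against the current reference before relying on this.
def build_responses_payload(model, messages, effort="medium",
                            verbosity="medium", service_tier="auto"):
    return {
        "model": model,
        "input": messages,                 # Responses API takes "input", not "messages"
        "reasoning": {"effort": effort},   # e.g. "low", "medium", "high"
        "text": {"verbosity": verbosity},  # GPT-5-family output-length control
        "service_tier": service_tier,      # e.g. "auto", "default", "flex"
    }

payload = build_responses_payload(
    "gpt-5", [{"role": "user", "content": "hello"}], effort="high")
print(payload["reasoning"])
# {'effort': 'high'}
```

Cloning one model into several configurations, as described above, then amounts to calling this builder with different `effort`/`service_tier` combinations.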
Yes, it needs to support both APIs. I'd be happy to take a look at the current state of the Koog implementation and work on the custom parameterization options if you want the support.
I wonder, could you patch the latest version based on Koog? For myself at least, the Responses API is much needed.
OpenAI supports its new models, especially Codex, only via the new Responses API.
Added support for the Responses API; I tried to stay close to the plugin's patterns.
Test by making a new custom GPT configuration, for example with this body:
```json
{
  "stream": true,
  "max_output_tokens": 8192,
  "model": "gpt-5.3-codex",
  "messages": "$OPENAI_MESSAGES",
  "temperature": 1
}
```

using the URL https://api.openai.com/v1/responses.
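To sanity-check a configuration like the one above outside the plugin, the template can be rendered and inspected locally. This sketch only builds and prints the request body (no network call); the rule that `$OPENAI_MESSAGES` is replaced with the chat's message list is an assumption about the plugin's templating:

```python
import json

# Sketch: substitute the $OPENAI_MESSAGES placeholder in the example template
# body before it would be POSTed to https://api.openai.com/v1/responses.
# The substitution rule is an assumption about how the plugin's templating works.
template = {
    "stream": True,
    "max_output_tokens": 8192,
    "model": "gpt-5.3-codex",
    "messages": "$OPENAI_MESSAGES",
    "temperature": 1,
}

def render_body(template, messages):
    """Return the JSON request body with the placeholder filled in."""
    body = dict(template)
    body["messages"] = messages  # replace the placeholder with real messages
    return json.dumps(body)

print(render_body(template, [{"role": "user", "content": "hi"}]))
```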