[BUG] Responses : add structured output for sdk #14206
Conversation
Can this PR be reviewed and merged on priority?
We are trying to use this in the Responses SDK; can this be merged on priority?
Move the mock test inside test_litellm/ so it runs through the GitHub action.
moved
I have removed the code that was causing the tests to fail. It was functionality I added in another PR that was later removed, but I suppose its test case remained; I have now removed that as well. Once this PR is merged, that should be fixed.
It is still failing due to an error that is out of scope for this PR.
Fix text_format parameter handling in litellm.responses() for structured output
Relevant issues
Fixes #13789
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
My PR passes all unit tests on make test-unit
Type
🐛 Bug Fix
🧹 Refactoring
Changes
The issue was that the response_format parameter was not handled for SDK requests, so even when it was specified in the request, the response was not structured output. This already worked for the LiteLLM proxy because the OpenAI client sends the "text" param to LiteLLM, which handles it correctly. For the SDK path, however, the OpenAI Python SDK uses "response_format", which must be converted into the "text" param before the API call; this conversion was missing in LiteLLM, so structured output failed. I have added that conversion, and now both the SDK and the proxy accept and return structured output.
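To illustrate the conversion described above, here is a minimal sketch of how a Pydantic model passed as response_format/text_format could be mapped onto the Responses API "text" parameter. The helper name, the exact schema layout, and the commented usage call are assumptions for illustration only, not LiteLLM's actual implementation or the test added in this PR.

```python
# Hedged sketch: converting a Pydantic model into the Responses API "text" param.
# _text_param_from_model is a hypothetical helper, not a LiteLLM function; the PR
# may structure this differently internally.
from pydantic import BaseModel


class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]


def _text_param_from_model(model_cls: type[BaseModel]) -> dict:
    # Build the json_schema-style "text" format that the Responses API expects.
    return {
        "format": {
            "type": "json_schema",
            "name": model_cls.__name__,
            "schema": model_cls.model_json_schema(),
            "strict": True,
        }
    }


# Illustrative SDK usage after the fix (assumed call shape):
# import litellm
# response = litellm.responses(
#     model="openai/gpt-4o",
#     input="Alice and Bob are going to a science fair on Friday.",
#     text_format=CalendarEvent,  # converted internally to the "text" param
# )
```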