fix: support tool_choice with specific tool names in LiteLLM streaming (fixes #1846) #1929
Changes from 1 commit
```diff
@@ -24,6 +24,7 @@
     ChatCompletionMessageCustomToolCall,
     ChatCompletionMessageFunctionToolCall,
     ChatCompletionMessageParam,
+    ChatCompletionNamedToolChoiceParam,
 )
 from openai.types.chat.chat_completion_message import (
     Annotation,
@@ -32,6 +33,7 @@
 )
 from openai.types.chat.chat_completion_message_function_tool_call import Function
 from openai.types.responses import Response
+from openai.types.responses.tool_choice_function import ToolChoiceFunction

 from ... import _debug
 from ...agent_output import AgentOutputSchemaBase
@@ -367,15 +369,39 @@ async def _fetch_response
         if isinstance(ret, litellm.types.utils.ModelResponse):
             return ret

+        # Convert tool_choice to the correct type for Response
+        # tool_choice can be a Literal, a ChatCompletionNamedToolChoiceParam, or omit
+        response_tool_choice: Literal["auto", "required", "none"] | ToolChoiceFunction
+        if tool_choice is omit:
+            response_tool_choice = "auto"
+        elif isinstance(tool_choice, dict):
+            # Convert from ChatCompletionNamedToolChoiceParam to ToolChoiceFunction
+            # The dict has structure: {"type": "function", "function": {"name": "tool_name"}}
+            func_data = tool_choice.get("function")
+            if (
+                tool_choice.get("type") == "function"
+                and func_data is not None
+                and isinstance(func_data, dict)
+            ):
+                response_tool_choice = ToolChoiceFunction(
+                    type="function", name=func_data["name"]
+                )
+            else:
+                # Fallback to auto if unexpected format
+                response_tool_choice = "auto"
```
A review comment on the new dict-handling branch suggested validating the name before constructing `ToolChoiceFunction`, since `func_data["name"]` raises `KeyError` when the key is missing:

```diff
-                response_tool_choice = ToolChoiceFunction(
-                    type="function", name=func_data["name"]
-                )
-            else:
-                # Fallback to auto if unexpected format
-                response_tool_choice = "auto"
+                name = func_data.get("name")
+                if isinstance(name, str) and name:
+                    response_tool_choice = ToolChoiceFunction(
+                        type="function", name=name
+                    )
+                else:
+                    # Fallback to auto if name is missing or invalid
+                    response_tool_choice = "auto"
```
✅ Fixed in commit fca3ed5. Now using `func_data.get("name")` with the validation `isinstance(tool_name, str) and tool_name` before constructing `ToolChoiceFunction` (lines 388-393).
(Outdated) Copilot AI · Oct 18, 2025:
Replace the `type: ignore` with an explicit cast for better type safety and readability. Example: `response_tool_choice = cast(Literal["auto", "required", "none"], tool_choice)`.
```diff
-            response_tool_choice = tool_choice  # type: ignore
+            response_tool_choice = cast(Literal["auto", "required", "none"], tool_choice)
```
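(For this suggestion to type-check, `cast` must be in scope, i.e. `from typing import cast`, alongside the `Literal` import the annotation already relies on.)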
✅ Fixed in commit fca3ed5. Now using an explicit cast at lines 398-400.
(Outdated) Copilot AI · Oct 18, 2025:
Add an explicit branch to accept a pre-built `ToolChoiceFunction` instance, to avoid silently falling back to `"auto"` when `tool_choice` is already a `ToolChoiceFunction`. Example insertion between the dict and literal checks: `elif isinstance(tool_choice, ToolChoiceFunction): response_tool_choice = tool_choice` (see the sketch below).
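A minimal, self-contained sketch of the full dispatch order with that branch checked first. `resolve_tool_choice` is a hypothetical helper name for illustration, not the PR's actual code, which does this inline in `_fetch_response`:

```python
from typing import Literal, cast

from openai.types.responses.tool_choice_function import ToolChoiceFunction


def resolve_tool_choice(
    tool_choice: object,
) -> Literal["auto", "required", "none"] | ToolChoiceFunction:
    """Sketch of the dispatch order, with the pre-built instance handled first."""
    if isinstance(tool_choice, ToolChoiceFunction):
        # Pass a pre-built ToolChoiceFunction through unchanged rather than
        # letting it hit the "unexpected format" fallback.
        return tool_choice
    if isinstance(tool_choice, dict):
        # Nested ChatCompletions shape: {"type": "function", "function": {"name": ...}}
        func_data = tool_choice.get("function")
        if tool_choice.get("type") == "function" and isinstance(func_data, dict):
            name = func_data.get("name")
            if isinstance(name, str) and name:
                return ToolChoiceFunction(type="function", name=name)
        return "auto"  # fallback for unexpected dict shapes
    if tool_choice in ("auto", "required", "none"):
        return cast(Literal["auto", "required", "none"], tool_choice)
    return "auto"  # omitted or unrecognized values
```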
✅ Fixed in commit fca3ed5. Added an explicit `ToolChoiceFunction` branch at lines 376-378.
I tested this, and it's still not fixed: `response_tool_choice` always ends up being `"auto"`, even when I pass `ModelSettings(tool_choice="my_tool")`.
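A reproduction along these lines would exercise the bug (a hedged sketch assuming the openai-agents SDK's public surface — `Agent`, `Runner`, `ModelSettings`, `function_tool`, and `LitellmModel`; the tool name and model string are placeholders):

```python
import asyncio

from agents import Agent, ModelSettings, Runner, function_tool
from agents.extensions.models.litellm_model import LitellmModel


@function_tool
def my_tool() -> str:
    """Placeholder tool used only to exercise tool_choice routing."""
    return "ok"


agent = Agent(
    name="repro",
    tools=[my_tool],
    model=LitellmModel(model="openai/gpt-4o-mini"),  # any LiteLLM-routed model
    # Forcing a specific tool by name; per the report above, this still
    # resolved to tool_choice="auto" in streaming before the fix.
    model_settings=ModelSettings(tool_choice="my_tool"),
)


async def main() -> None:
    # The bug is in the streaming path, so stream the run to completion.
    result = Runner.run_streamed(agent, "call the tool")
    async for _event in result.stream_events():
        pass


asyncio.run(main())
```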
Thanks for your test, I will test it again later!
✅ Fixed in commit fca3ed5 and verified with integration testing.

Root cause: the initial fix incorrectly assumed LiteLLM uses `openai_responses.Converter` (flat format), but it actually uses `chatcmpl_converter.Converter`, which returns the nested ChatCompletions format.

The fix: the code now correctly handles the nested dict structure `{"type": "function", "function": {"name": "my_tool"}}` by accessing `tool_choice.get("function").get("name")` (lines 382-393).

Verification: an integration test confirms that when `ModelSettings(tool_choice="my_specific_tool")` is passed, `litellm.acompletion` receives the correct nested dict format, and `Response.tool_choice` is properly set to `ToolChoiceFunction(name="my_specific_tool")`.

Test output:

```
litellm.acompletion called with tool_choice: {'type': 'function', 'function': {'name': 'my_specific_tool'}}
```

The fix is now working correctly!
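To make the two formats concrete, a short illustrative sketch (values taken from the test output above; variable names are ours, not the PR's):

```python
from openai.types.responses.tool_choice_function import ToolChoiceFunction

# Nested ChatCompletions-style tool_choice, as produced by
# chatcmpl_converter.Converter and passed to litellm.acompletion:
chat_tool_choice = {"type": "function", "function": {"name": "my_specific_tool"}}

# Flat Responses-style value expected on Response.tool_choice:
response_tool_choice = ToolChoiceFunction(
    type="function",
    name=chat_tool_choice["function"]["name"],  # unwrap the nested dict
)
assert response_tool_choice.name == "my_specific_tool"
```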