Skip to content
32 changes: 29 additions & 3 deletions src/agents/extensions/models/litellm_model.py
Original file line number Diff line number Diff line change
Expand Up @@ -24,6 +24,7 @@
ChatCompletionMessageCustomToolCall,
ChatCompletionMessageFunctionToolCall,
ChatCompletionMessageParam,
ChatCompletionNamedToolChoiceParam,
)
from openai.types.chat.chat_completion_message import (
Annotation,
Expand All @@ -32,6 +33,7 @@
)
from openai.types.chat.chat_completion_message_function_tool_call import Function
from openai.types.responses import Response
from openai.types.responses.tool_choice_function import ToolChoiceFunction

from ... import _debug
from ...agent_output import AgentOutputSchemaBase
Expand Down Expand Up @@ -367,15 +369,39 @@ async def _fetch_response(
if isinstance(ret, litellm.types.utils.ModelResponse):
return ret

# Convert tool_choice to the correct type for Response
# tool_choice can be a Literal, a ChatCompletionNamedToolChoiceParam, or omit
response_tool_choice: Literal["auto", "required", "none"] | ToolChoiceFunction
if tool_choice is omit:
response_tool_choice = "auto"
elif isinstance(tool_choice, dict):
Copy link

Copilot AI Oct 18, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Add an explicit branch to accept a pre-built ToolChoiceFunction instance to avoid silently falling back to "auto" if tool_choice is already a ToolChoiceFunction. Example insertion between the dict and literal checks: elif isinstance(tool_choice, ToolChoiceFunction): response_tool_choice = tool_choice.

Copilot uses AI. Check for mistakes.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5. Added explicit ToolChoiceFunction branch at lines 376-378 to handle pre-built instances.

# Convert from ChatCompletionNamedToolChoiceParam to ToolChoiceFunction
# The dict has structure: {"type": "function", "function": {"name": "tool_name"}}
func_data = tool_choice.get("function")
if (

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Badge Tool choice dict never converted to ToolChoiceFunction

The new conversion logic assumes a Chat Completions style payload ({"type": "function", "function": {"name": ...}}), but Converter.convert_tool_choice already returns the Responses format {"type": "function", "name": ...}. Because the expected function sub-dict is never present, the ToolChoiceFunction branch is never executed and every dict-valued tool_choice falls through to the 'auto' fallback. Streaming now avoids the validation error but the returned Response.tool_choice always reports 'auto', so callers still cannot see the specific tool that was forced. Consider reading the name key directly when tool_choice is a function dict.

Useful? React with 👍 / 👎.

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5.

The root cause was identified: LiteLLM uses chatcmpl_converter.Converter (not openai_responses.Converter), which returns the nested ChatCompletions format: {"type": "function", "function": {"name": "..."}}.

The fix now correctly extracts the name from the nested structure at lines 382-393:

func_data = tool_choice.get("function")
if (...):
    tool_name = func_data.get("name")
    if isinstance(tool_name, str) and tool_name:
        response_tool_choice = ToolChoiceFunction(type="function", name=tool_name)

Integration testing confirms tool_choice is now correctly converted and passed to litellm.acompletion.

tool_choice.get("type") == "function"
and func_data is not None
and isinstance(func_data, dict)
):
response_tool_choice = ToolChoiceFunction(
type="function", name=func_data["name"]
)
else:
# Fallback to auto if unexpected format
response_tool_choice = "auto"
Copy link

Copilot AI Oct 18, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Accessing func_data["name"] without verifying the key exists can raise a KeyError if the dict is malformed. Consider a defensive check and fallback to "auto" (or raise a clear ValueError) when name is missing or not a non-empty string. For example, extract name via name = func_data.get("name") and validate isinstance(name, str) and name before constructing ToolChoiceFunction.

Suggested change
response_tool_choice = ToolChoiceFunction(
type="function", name=func_data["name"]
)
else:
# Fallback to auto if unexpected format
response_tool_choice = "auto"
name = func_data.get("name")
if isinstance(name, str) and name:
response_tool_choice = ToolChoiceFunction(
type="function", name=name
)
else:
# Fallback to auto if name is missing or invalid
response_tool_choice = "auto"

Copilot uses AI. Check for mistakes.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5. Now using func_data.get("name") with validation isinstance(tool_name, str) and tool_name before constructing ToolChoiceFunction (lines 388-393).

elif tool_choice in ("auto", "required", "none"):
response_tool_choice = tool_choice # type: ignore
Copy link

Copilot AI Oct 18, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Replace the type: ignore with an explicit cast for better type safety and readability. Example: response_tool_choice = cast(Literal["auto", "required", "none"], tool_choice).

Suggested change
response_tool_choice = tool_choice # type: ignore
response_tool_choice = cast(Literal["auto", "required", "none"], tool_choice)

Copilot uses AI. Check for mistakes.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5. Now using explicit cast at lines 398-400.

else:
# Fallback to auto for any other case
response_tool_choice = "auto"
Copy link

Copilot AI Oct 18, 2025

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Add an explicit branch to accept a pre-built ToolChoiceFunction instance to avoid silently falling back to "auto" if tool_choice is already a ToolChoiceFunction. Example insertion between the dict and literal checks: elif isinstance(tool_choice, ToolChoiceFunction): response_tool_choice = tool_choice.

Copilot uses AI. Check for mistakes.
Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5. Added explicit ToolChoiceFunction branch at lines 376-378.


response = Response(
id=FAKE_RESPONSES_ID,
created_at=time.time(),
model=self.model,
object="response",
output=[],
tool_choice=cast(Literal["auto", "required", "none"], tool_choice)
if tool_choice is not omit
else "auto",
tool_choice=response_tool_choice,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I tested this, and it’s still not fixed, response_tool_choice always ends up being "auto", even when I pass:ModelSettings(tool_choice="my_tool")

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Thanks your test, I will test it again later!

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

✅ Fixed in commit fca3ed5 and verified with integration testing.

Root cause: The initial fix incorrectly assumed LiteLLM uses openai_responses.Converter (flat format), but it actually uses chatcmpl_converter.Converter which returns nested ChatCompletions format.

The fix: Now correctly handles the nested dict structure {"type": "function", "function": {"name": "my_tool"}} by accessing tool_choice.get("function").get("name") (lines 382-393).

Verification: Integration test confirms that when ModelSettings(tool_choice="my_specific_tool") is passed, litellm.acompletion receives the correct nested dict format, and Response.tool_choice is properly set to ToolChoiceFunction(name="my_specific_tool").

Test output:

litellm.acompletion called with tool_choice: {'type': 'function', 'function': {'name': 'my_specific_tool'}}

The fix is now working correctly!

top_p=model_settings.top_p,
temperature=model_settings.temperature,
tools=[],
Expand Down