Introduce tool_use_behavior on agents #203
Conversation
else:
    return cast(
        ToolsToFinalOutputResult, agent.tool_use_behavior(context_wrapper, tool_results)
    )
Perhaps a try/except/rethrow around the call to the user-provided function with a useful error message?
Was just playing with this branch, and even with this function it was stopping after 1 turn:

def custom_tool_use_behavior(context, results):
    return ToolsToFinalOutputResult(is_final_output=False, final_output=None)

... turns out it was a silly mistake on my part where I had forgotten to import ToolsToFinalOutputResult, but there were no obvious user-visible errors that I could find -- even with the openai.agents logger set to DEBUG. It just looked as if the agent had decided it was done even though is_final_output was False.
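For concreteness, the suggested try/except/rethrow might look something like the sketch below. This is a sketch only: the call site and variable names come from the diff hunk above, and UserError is an assumed exception type, not necessarily what the SDK would actually raise.

# Hypothetical sketch of the suggested wrapping; cast and
# ToolsToFinalOutputResult are already in scope at the diff site.
try:
    result = agent.tool_use_behavior(context_wrapper, tool_results)
except Exception as e:
    # Re-raise with context so a broken user-provided function fails loudly
    # instead of looking like the agent simply decided it was done.
    raise UserError(
        f"Error running tool_use_behavior function for agent {agent.name}: {e}"
    ) from e
return cast(ToolsToFinalOutputResult, result)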
oh weird! Yeah I'll add that. Thanks
NOTE: This configuration is specific to FunctionTools. Hosted tools, such as file search,
web search, etc., are always processed by the LLM.
"""
This completely supports my use cases. Thanks!

Minor suggestion: in my case an agent's goal is to invoke one particular tool, but it needs to use a few other tools to get there. I imagine this may be a fairly common use case.

If you agree, there are two ways the library could make that use case even easier to handle:

(1) Similar to the ModelSettings.tool_choice parameter, allow tool_use_behavior to be a str that matches a tool name. The behaviour would then be "stop after the named tool has been called".

(2) Or, provide a factory function similar to the below:
def stop_after_tool_use(tool: str | FunctionTool) -> ToolsToFinalOutputFunction:
    if isinstance(tool, FunctionTool):
        tool = tool.name

    def custom_tool_use_behavior(context, results):
        for result in results:
            if tool == result.tool.name:
                return ToolsToFinalOutputResult(is_final_output=True, final_output=result.output)
        return ToolsToFinalOutputResult(is_final_output=False, final_output=None)

    return custom_tool_use_behavior
# Usage example:
@function_tool
def get_foo_id(user_input):
    ...

@function_tool
def get_bar_id(user_input):
    ...

@function_tool
def get_large_dataset(foo_id, bar_id):
    ...

agent = Agent(
    name="Foo Agent",
    instructions="Using your tools, retrieve the requested dataset.",
    tools=[get_foo_id, get_bar_id, get_large_dataset],
    tool_use_behavior=stop_after_tool_use(get_large_dataset),
)
good suggestion
Context
By default, the outputs of tools are sent to the LLM again. The LLM gets to read the outputs and produce a new response. There are cases where this is not desired: for example, if you force tool use (tool_choice=required), then the agent will just infinite loop.

This enables you to have different behavior, e.g. use the first tool output as the final output, or write a custom function to process tool results and potentially produce an output.
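To make that concrete, here is a rough sketch of what the two behaviors might look like at the call site. This is a sketch under assumptions, not a quote from this PR's diff: the "stop_on_first_tool" literal, the import paths, and the custom-function signature are inferred from the discussion above.

from agents import Agent, ToolsToFinalOutputResult, function_tool

@function_tool
def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

# Assumed literal: stop after the first tool call and use its output as the
# final output, without sending it back to the LLM.
agent = Agent(
    name="Weather Agent",
    instructions="Use the weather tool to answer.",
    tools=[get_weather],
    tool_use_behavior="stop_on_first_tool",
)

# Custom function: inspect the tool results and decide whether to stop.
# Returning is_final_output=False sends the results back to the LLM as usual.
def process_results(context, results):
    if results and "sunny" in results[0].output:
        return ToolsToFinalOutputResult(is_final_output=True, final_output=results[0].output)
    return ToolsToFinalOutputResult(is_final_output=False, final_output=None)

custom_agent = Agent(
    name="Custom Weather Agent",
    instructions="Use the weather tool to answer.",
    tools=[get_weather],
    tool_use_behavior=process_results,
)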
Test plan
Added new tests and ran the existing tests. Also added examples.
Closes #117