
Commit fb9573b

Merge branch 'main' into br_tool_hooks
2 parents 65329a9 + 2261aab

File tree: 126 files changed, +7694 −700 lines

.github/workflows/issues.yml (+6 −3)

```diff
@@ -17,7 +17,10 @@ jobs:
           stale-issue-label: "stale"
           stale-issue-message: "This issue is stale because it has been open for 7 days with no activity."
           close-issue-message: "This issue was closed because it has been inactive for 3 days since being marked as stale."
-          days-before-pr-stale: -1
-          days-before-pr-close: -1
-          any-of-labels: 'question,needs-more-info'
+          any-of-issue-labels: 'question,needs-more-info'
+          days-before-pr-stale: 10
+          days-before-pr-close: 7
+          stale-pr-label: "stale"
+          stale-pr-message: "This PR is stale because it has been open for 10 days with no activity."
+          close-pr-message: "This PR was closed because it has been inactive for 7 days since being marked as stale."
           repo-token: ${{ secrets.GITHUB_TOKEN }}
```

.github/workflows/tests.yml (+3 −0)

```diff
@@ -8,6 +8,9 @@ on:
     branches:
       - main
 
+env:
+  UV_FROZEN: "1"
+
 jobs:
   lint:
     runs-on: ubuntu-latest
```

.gitignore (+2 −2)

```diff
@@ -135,10 +135,10 @@ dmypy.json
 cython_debug/
 
 # PyCharm
-#.idea/
+.idea/
 
 # Ruff stuff:
 .ruff_cache/
 
 # PyPI configuration file
-.pypirc
+.pypirc
```

Makefile (+1 −1)

```diff
@@ -5,6 +5,7 @@ sync:
 .PHONY: format
 format:
 	uv run ruff format
+	uv run ruff check --fix
 
 .PHONY: lint
 lint:
@@ -36,7 +37,6 @@ snapshots-create:
 .PHONY: old_version_tests
 old_version_tests:
 	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m pytest
-	UV_PROJECT_ENVIRONMENT=.venv_39 uv run --python 3.9 -m mypy .
 
 .PHONY: build-docs
 build-docs:
```

README.md (+2 −0)

````diff
@@ -30,6 +30,8 @@ source env/bin/activate
 pip install openai-agents
 ```
 
+For voice support, install with the optional `voice` group: `pip install 'openai-agents[voice]'`.
+
 ## Hello world example
 
 ```python
````

docs/agents.md (+3 −1)

```diff
@@ -142,4 +142,6 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f
 
 !!! note
 
-    If requiring tool use, you should consider setting [`Agent.tool_use_behavior`] to stop the Agent from running when a tool output is produced. Otherwise, the Agent might run in an infinite loop, where the LLM produces a tool call , and the tool result is sent to the LLM, and this infinite loops because the LLM is always forced to use a tool.
+    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.
+
+    If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`] which will directly use the tool output as the final response without further LLM processing.
```
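The behavior this hunk documents is easier to see in code. A minimal sketch, assuming the `Agent`, `Runner`, `ModelSettings`, and `function_tool` exports from the SDK's top-level `agents` package (the `ModelSettings(tool_choice=...)` parameter is not shown in this diff and is an assumption here):

```python
from agents import Agent, ModelSettings, Runner, function_tool

@function_tool
def get_time() -> str:
    """Return the current time (stubbed for the example)."""
    return "12:00"

agent = Agent(
    name="Clock",
    instructions="Answer using your tools.",
    tools=[get_time],
    # Force a tool call on the first turn; per the docs change above,
    # the SDK then resets tool_choice to "auto" so the run cannot loop.
    model_settings=ModelSettings(tool_choice="required"),
    # Alternatively, stop as soon as the first tool returns and use
    # its output directly as the final response:
    tool_use_behavior="stop_on_first_tool",
)

result = Runner.run_sync(agent, "What time is it?")
print(result.final_output)  # "12:00" with stop_on_first_tool
```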

docs/assets/images/graph.png (92.8 KB, binary image)

docs/context.md (+3 −3)

```diff
@@ -41,14 +41,14 @@ async def fetch_user_age(wrapper: RunContextWrapper[UserInfo]) -> str: # (2)!
     return f"User {wrapper.context.name} is 47 years old"
 
 async def main():
-    user_info = UserInfo(name="John", uid=123)  # (3)!
+    user_info = UserInfo(name="John", uid=123)
 
-    agent = Agent[UserInfo](  # (4)!
+    agent = Agent[UserInfo](  # (3)!
         name="Assistant",
         tools=[fetch_user_age],
     )
 
-    result = await Runner.run(
+    result = await Runner.run(  # (4)!
         starting_agent=agent,
         input="What is the age of the user?",
         context=user_info,
```

docs/guardrails.md (+1 −1)

```diff
@@ -29,7 +29,7 @@ Output guardrails run in 3 steps:
 
 !!! Note
 
-    Output guardrails are intended to run on the final agent input, so an agent's guardrails only run if the agent is the *last* agent. Similar to the input guardrails, we do this because guardrails tend to be related to the actual Agent - you'd run different guardrails for different agents, so colocating the code is useful for readability.
+    Output guardrails are intended to run on the final agent output, so an agent's guardrails only run if the agent is the *last* agent. Similar to the input guardrails, we do this because guardrails tend to be related to the actual Agent - you'd run different guardrails for different agents, so colocating the code is useful for readability.
 
 ## Tripwires
```

docs/mcp.md (new file, +51)

# Model context protocol

The [Model context protocol](https://modelcontextprotocol.io/introduction) (aka MCP) is a way to provide tools and context to the LLM. From the MCP docs:

> MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

The Agents SDK has support for MCP. This enables you to use a wide range of MCP servers to provide tools to your Agents.

## MCP servers

Currently, the MCP spec defines two kinds of servers, based on the transport mechanism they use:

1. **stdio** servers run as a subprocess of your application. You can think of them as running "locally".
2. **HTTP over SSE** servers run remotely. You connect to them via a URL.

You can use the [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse] classes to connect to these servers.

For example, this is how you'd use the [official MCP filesystem server](https://www.npmjs.com/package/@modelcontextprotocol/server-filesystem):

```python
async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
    }
) as server:
    tools = await server.list_tools()
```

## Using MCP servers

MCP servers can be added to Agents. The Agents SDK will call `list_tools()` on the MCP servers each time the Agent is run. This makes the LLM aware of the MCP server's tools. When the LLM calls a tool from an MCP server, the SDK calls `call_tool()` on that server.

```python
agent = Agent(
    name="Assistant",
    instructions="Use the tools to achieve the task",
    mcp_servers=[mcp_server_1, mcp_server_2],
)
```

## Caching

Every time an Agent runs, it calls `list_tools()` on the MCP server. This can be a latency hit, especially if the server is a remote server. To automatically cache the list of tools, you can pass `cache_tools_list=True` to both [`MCPServerStdio`][agents.mcp.server.MCPServerStdio] and [`MCPServerSse`][agents.mcp.server.MCPServerSse]. You should only do this if you're certain the tool list will not change.

If you want to invalidate the cache, you can call `invalidate_tools_cache()` on the servers.

## End-to-end example

View complete working examples at [examples/mcp](https://github.com/openai/openai-agents-python/tree/main/examples/mcp).
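To make the caching section above concrete, here is a minimal sketch. It assumes `MCPServerStdio` is importable from `agents.mcp` (the docs reference it as `agents.mcp.server.MCPServerStdio`) and reuses the `samples_dir` placeholder from the filesystem example:

```python
from agents.mcp import MCPServerStdio

async def load_tools(samples_dir: str):
    server = MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        },
        # Reuse the first list_tools() result on later runs; per the docs,
        # only do this if the server's tool list never changes.
        cache_tools_list=True,
    )
    async with server:
        tools = await server.list_tools()
        # If the tool list might have changed, drop the cache so the
        # next call re-fetches from the server:
        server.invalidate_tools_cache()
        return tools
```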

New API reference stubs (each new file is +3 lines: a title heading plus a mkdocstrings directive):

- docs/ref/mcp/server.md: `# MCP Servers` / `::: agents.mcp.server`
- docs/ref/mcp/util.md: `# MCP Util` / `::: agents.mcp.util`
- docs/ref/voice/events.md: `# Events` / `::: agents.voice.events`
- docs/ref/voice/exceptions.md: `# Exceptions` / `::: agents.voice.exceptions`
- docs/ref/voice/input.md: `# Input` / `::: agents.voice.input`
- docs/ref/voice/model.md: `# Model` / `::: agents.voice.model`
- docs/ref/voice/models/openai_model_provider.md: `# OpenAIVoiceModelProvider` / `::: agents.voice.models.openai_model_provider`
- docs/ref/voice/models/openai_stt.md: `# OpenAI STT` / `::: agents.voice.models.openai_stt`
- docs/ref/voice/models/openai_tts.md: `# OpenAI TTS` / `::: agents.voice.models.openai_tts`
- docs/ref/voice/pipeline.md: `# Pipeline` / `::: agents.voice.pipeline`
- docs/ref/voice/pipeline_config.md: `# Pipeline Config` / `::: agents.voice.pipeline_config`
- docs/ref/voice/result.md: `# Result` / `::: agents.voice.result`
- docs/ref/voice/utils.md: `# Utils` / `::: agents.voice.utils`
- docs/ref/voice/workflow.md: `# Workflow` / `::: agents.voice.workflow`

docs/tracing.md (+10 −1)

```diff
@@ -35,6 +35,9 @@ By default, the SDK traces the following:
 - Function tool calls are each wrapped in `function_span()`
 - Guardrails are wrapped in `guardrail_span()`
 - Handoffs are wrapped in `handoff_span()`
+- Audio inputs (speech-to-text) are wrapped in a `transcription_span()`
+- Audio outputs (text-to-speech) are wrapped in a `speech_span()`
+- Related audio spans may be parented under a `speech_group_span()`
 
 By default, the trace is named "Agent trace". You can set this name if you use `trace`, or you can configure the name and other properties with the [`RunConfig`][agents.run.RunConfig].
@@ -76,7 +79,11 @@ Spans are automatically part of the current trace, and are nested under the near
 
 ## Sensitive data
 
-Some spans track potentially sensitive data. For example, the `generation_span()` stores the inputs/outputs of the LLM generation, and `function_span()` stores the inputs/outputs of function calls. These may contain sensitive data, so you can disable capturing that data via [`RunConfig.trace_include_sensitive_data`][agents.run.RunConfig.trace_include_sensitive_data].
+Certain spans may capture potentially sensitive data.
+
+The `generation_span()` stores the inputs/outputs of the LLM generation, and `function_span()` stores the inputs/outputs of function calls. These may contain sensitive data, so you can disable capturing that data via [`RunConfig.trace_include_sensitive_data`][agents.run.RunConfig.trace_include_sensitive_data].
+
+Similarly, Audio spans include base64-encoded PCM data for input and output audio by default. You can disable capturing this audio data by configuring [`VoicePipelineConfig.trace_include_sensitive_audio_data`][agents.voice.pipeline_config.VoicePipelineConfig.trace_include_sensitive_audio_data].
 
 ## Custom tracing processors
@@ -92,6 +99,7 @@ To customize this default setup, to send traces to alternative or additional bac
 
 ## External tracing processors list
 
+- [Weights & Biases](https://weave-docs.wandb.ai/guides/integrations/openai_agents)
 - [Arize-Phoenix](https://docs.arize.com/phoenix/tracing/integrations-tracing/openai-agents-sdk)
 - [MLflow](https://mlflow.org/docs/latest/tracing/integrations/openai-agent)
 - [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk)
@@ -102,3 +110,4 @@ To customize this default setup, to send traces to alternative or additional bac
 - [LangSmith](https://docs.smith.langchain.com/observability/how_to_guides/trace_with_openai_agents_sdk)
 - [Maxim AI](https://www.getmaxim.ai/docs/observe/integrations/openai-agents-sdk)
 - [Comet Opik](https://www.comet.com/docs/opik/tracing/integrations/openai_agents)
+- [Langfuse](https://langfuse.com/docs/integrations/openaiagentssdk/openai-agents)
```
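A minimal sketch of the two sensitive-data opt-outs this hunk documents. The parameter names come from the diff above; the exact import locations (`agents` and `agents.voice`, the latter requiring the `voice` extra) are assumptions:

```python
from agents import Agent, RunConfig, Runner
from agents.voice import VoicePipelineConfig  # needs: pip install 'openai-agents[voice]'

agent = Agent(name="Assistant", instructions="Be helpful.")

# Keep generation_span()/function_span(), but omit LLM and tool
# inputs/outputs from the captured trace data.
result = Runner.run_sync(
    agent,
    "Summarize my last order.",
    run_config=RunConfig(trace_include_sensitive_data=False),
)

# Likewise, omit base64-encoded PCM audio from transcription_span()
# and speech_span() when running a voice pipeline.
voice_config = VoicePipelineConfig(trace_include_sensitive_audio_data=False)
```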

docs/visualization.md (new file, +86)

# Agent Visualization

Agent visualization allows you to generate a structured graphical representation of agents and their relationships using **Graphviz**. This is useful for understanding how agents, tools, and handoffs interact within an application.

## Installation

Install the optional `viz` dependency group:

```bash
pip install "openai-agents[viz]"
```

## Generating a Graph

You can generate an agent visualization using the `draw_graph` function. This function creates a directed graph where:

- **Agents** are represented as yellow boxes.
- **Tools** are represented as green ellipses.
- **Handoffs** are directed edges from one agent to another.

### Example Usage

```python
from agents import Agent, function_tool
from agents.extensions.visualization import draw_graph

@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English.",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
    tools=[get_weather],
)

draw_graph(triage_agent)
```

![Agent Graph](./assets/images/graph.png)

This generates a graph that visually represents the structure of the **triage agent** and its connections to sub-agents and tools.

## Understanding the Visualization

The generated graph includes:

- A **start node** (`__start__`) indicating the entry point.
- Agents represented as **rectangles** with yellow fill.
- Tools represented as **ellipses** with green fill.
- Directed edges indicating interactions:
  - **Solid arrows** for agent-to-agent handoffs.
  - **Dotted arrows** for tool invocations.
- An **end node** (`__end__`) indicating where execution terminates.

## Customizing the Graph

### Showing the Graph

By default, `draw_graph` displays the graph inline. To show the graph in a separate window, write the following:

```python
draw_graph(triage_agent).view()
```

### Saving the Graph

To save the graph as a file instead, specify a filename:

```python
draw_graph(triage_agent, filename="agent_graph.png")
```

This will generate `agent_graph.png` in the working directory.
