
Commit 3b69211

rm-openai authored and Dale committed
Merge pull request openai#113 from openai/examples/jupyter
Adding Jupyter notebook example
2 parents 1670d40 + 26f9cb4 commit 3b69211

12 files changed: +1,293 −73 lines

README.md (+5 −3)

@@ -21,8 +21,8 @@ Notably, our SDK [is compatible](https://openai.github.io/openai-agents-python/m
 
 ```
 python -m venv env
-source env/bin/activate
-```
+.\env\Scripts\activate
+
 
 2. Install Agents SDK

@@ -47,9 +47,11 @@ print(result.final_output)
 
 (_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)
 
+(_For Jupyter notebook users, see [hello_world_jupyter.py](examples/basic/hello_world_jupyter.py)_)
+
 ## Handoffs example
 
-```py
+```python
 from agents import Agent, Runner
 import asyncio
docs/config.md (+29 −66)

@@ -1,94 +1,57 @@
 # Configuring the SDK
 
 ## API keys and clients
+# Configure the API key and client
+# By default, the SDK reads the API key from the OPENAI_API_KEY environment variable
 
-By default, the SDK looks for the `OPENAI_API_KEY` environment variable for LLM requests and tracing, as soon as it is imported. If you are unable to set that environment variable before your app starts, you can use the [set_default_openai_key()][agents.set_default_openai_key] function to set the key.
-
-```python
+# Option 1: set the default API key directly
 from agents import set_default_openai_key
-
-set_default_openai_key("sk-...")
-```
+set_default_openai_key("sk-...")  # replace with your actual API key
 
-Alternatively, you can also configure an OpenAI client to be used. By default, the SDK creates an `AsyncOpenAI` instance, using the API key from the environment variable or the default key set above. You can change this by using the [set_default_openai_client()][agents.set_default_openai_client] function.
-
-```python
+# Option 2: use a custom OpenAI client
 from openai import AsyncOpenAI
 from agents import set_default_openai_client
-
-custom_client = AsyncOpenAI(base_url="...", api_key="...")
-set_default_openai_client(custom_client)
-```
+custom_client = AsyncOpenAI(
+    base_url="...",  # custom API endpoint
+    api_key="..."  # custom API key
+)
+set_default_openai_client(custom_client)  # set the custom client as the default
 
-Finally, you can also customize the OpenAI API that is used. By default, we use the OpenAI Responses API. You can override this to use the Chat Completions API by using the [set_default_openai_api()][agents.set_default_openai_api] function.
-
-```python
+# Option 3: choose which OpenAI API to use
 from agents import set_default_openai_api
-
-set_default_openai_api("chat_completions")
-```
+set_default_openai_api("chat_completions")  # use the Chat Completions API
 
 ## Tracing
+# Tracing configuration
+# Enabled by default; uses the same API key as the LLM
 
-Tracing is enabled by default. It uses the OpenAI API keys from the section above by default (i.e. the environment variable or the default key you set). You can specifically set the API key used for tracing by using the [`set_tracing_export_api_key`][agents.set_tracing_export_api_key] function.
-
-```python
+# Set a dedicated tracing API key
 from agents import set_tracing_export_api_key
-
-set_tracing_export_api_key("sk-...")
-```
+set_tracing_export_api_key("sk-...")  # key used only for tracing
 
-You can also disable tracing entirely by using the [`set_tracing_disabled()`][agents.set_tracing_disabled] function.
-
-```python
+# Disable tracing
 from agents import set_tracing_disabled
-
-set_tracing_disabled(True)
-```
+set_tracing_disabled(True)  # pass True to disable tracing
 
 ## Debug logging
+# Debug logging configuration
+# By default, only warnings and errors are output
 
-The SDK has two Python loggers without any handlers set. By default, this means that warnings and errors are sent to `stdout`, but other logs are suppressed.
-
-To enable verbose logging, use the [`enable_verbose_stdout_logging()`][agents.enable_verbose_stdout_logging] function.
-
-```python
+# Enable verbose log output
 from agents import enable_verbose_stdout_logging
-
-enable_verbose_stdout_logging()
-```
+enable_verbose_stdout_logging()  # enable verbose logging
 
-Alternatively, you can customize the logs by adding handlers, filters, formatters, etc. You can read more in the [Python logging guide](https://docs.python.org/3/howto/logging.html).
-
-```python
+# Custom logging configuration
 import logging
-
-logger = logging.getLogger("openai.agents") # or openai.agents.tracing for the Tracing logger
-
-# To make all logs show up
-logger.setLevel(logging.DEBUG)
-# To make info and above show up
-logger.setLevel(logging.INFO)
-# To make warning and above show up
-logger.setLevel(logging.WARNING)
-# etc
-
-# You can customize this as needed, but this will output to `stderr` by default
-logger.addHandler(logging.StreamHandler())
-```
+logger = logging.getLogger("openai.agents")  # get the SDK logger
+logger.setLevel(logging.DEBUG)  # set the log level
+logger.addHandler(logging.StreamHandler())  # add a console handler
 
 ### Sensitive data in logs
+# Controlling sensitive data in logs
 
-Certain logs may contain sensitive data (for example, user data). If you want to disable this data from being logged, set the following environment variables.
-
-To disable logging LLM inputs and outputs:
-
-```bash
+# Disable logging of model data
 export OPENAI_AGENTS_DONT_LOG_MODEL_DATA=1
-```
-
-To disable logging tool inputs and outputs:
-
-```bash
+# Disable logging of tool data
 export OPENAI_AGENTS_DONT_LOG_TOOL_DATA=1
-```
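Pulled together, the configuration calls touched in this file amount to roughly the following sketch. The key, endpoint, and API choice are placeholders, and in practice you would use only the options you need:

```python
from openai import AsyncOpenAI
from agents import (
    set_default_openai_key,
    set_default_openai_client,
    set_default_openai_api,
    set_tracing_disabled,
    enable_verbose_stdout_logging,
)

# Either rely on the OPENAI_API_KEY environment variable, or set a key explicitly.
set_default_openai_key("sk-...")  # placeholder key

# Optionally route all requests through a custom OpenAI-compatible client.
custom_client = AsyncOpenAI(base_url="https://example.invalid/v1", api_key="sk-...")  # placeholder endpoint
set_default_openai_client(custom_client)

# Optionally switch from the default Responses API to the Chat Completions API.
set_default_openai_api("chat_completions")

# Optional while debugging: turn tracing off and verbose logging on.
set_tracing_disabled(True)
enable_verbose_stdout_logging()
```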

examples/basic/hello_world.py (+18 −4)

@@ -1,16 +1,30 @@
 import asyncio
-
-from agents import Agent, Runner
-
+from openai import AsyncOpenAI
+from agents import Agent, Runner, ModelSettings, OpenAIChatCompletionsModel, set_default_openai_client
 
 async def main():
+    # Set up a custom OpenAI client
+    custom_client = AsyncOpenAI(
+        # Custom API endpoint
+        base_url="https://ark.cn-beijing.volces.com/api/v3",
+        # Custom API key
+        api_key="c9e7d8b1-ca29-4c1d-85bb-68fa9d399a6d"
+    )
+    set_default_openai_client(custom_client, use_for_tracing=False)
+
     agent = Agent(
         name="Assistant",
         instructions="You only respond in haikus.",
+        model=OpenAIChatCompletionsModel(
+            model="ep-20250217151433-6xcvv",
+            openai_client=custom_client,
+        ),
+        # model_settings=ModelSettings(temperature=0.5)
     )
 
-    result = await Runner.run(agent, "Tell me about recursion in programming.")
+    result = await Runner.run(agent, "你好")
     print(result.final_output)
+    # Tell me about recursion in programming.
     # Function calls itself,
     # Looping in smaller pieces,
     # Endless by design.
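The added code above hardcodes a provider endpoint, key, and model ID. If you adapt it, a safer variant reads those values from environment variables; the variable names below are illustrative only, and the SDK calls are the same ones used in the diff:

```python
import asyncio
import os

from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIChatCompletionsModel, set_default_openai_client


async def main():
    # Hypothetical environment variables for an OpenAI-compatible provider.
    custom_client = AsyncOpenAI(
        base_url=os.environ["CUSTOM_BASE_URL"],
        api_key=os.environ["CUSTOM_API_KEY"],
    )
    set_default_openai_client(custom_client, use_for_tracing=False)

    agent = Agent(
        name="Assistant",
        instructions="You only respond in haikus.",
        model=OpenAIChatCompletionsModel(
            model=os.environ["CUSTOM_MODEL"],  # provider-specific model name
            openai_client=custom_client,
        ),
    )

    result = await Runner.run(agent, "Tell me about recursion in programming.")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```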

examples/basic/hello_world_jupyter.py (+11, new file)

@@ -0,0 +1,11 @@
+from agents import Agent, Runner
+
+agent = Agent(name="Assistant", instructions="You are a helpful assistant")
+
+# Intended for Jupyter notebooks where there's an existing event loop
+result = await Runner.run(agent, "Write a haiku about recursion in programming.")  # type: ignore[top-level-await]  # noqa: F704
+print(result.final_output)
+
+# Code within code loops,
+# Infinite mirrors reflect—
+# Logic folds on self.
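The top-level `await` above relies on the event loop that Jupyter already provides. In a plain Python script, the same example would normally be wrapped in `asyncio.run()`; a minimal sketch:

```python
import asyncio

from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")


async def main():
    # Outside Jupyter there is no running event loop, so we start one here.
    result = await Runner.run(agent, "Write a haiku about recursion in programming.")
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```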

examples/test/README.md (+178, new file)

# OpenAI Agents SDK

The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.

<img src="https://cdn.openai.com/API/docs/images/orchestration.png" alt="Image of the Agents Tracing UI" style="max-height: 803px;">

### Core concepts:

1. [**Agents**](https://openai.github.io/openai-agents-python/agents): LLMs configured with instructions, tools, guardrails, and handoffs
2. [**Handoffs**](https://openai.github.io/openai-agents-python/handoffs/): Allow agents to transfer control to other agents for specific tasks
3. [**Guardrails**](https://openai.github.io/openai-agents-python/guardrails/): Configurable safety checks for input and output validation
4. [**Tracing**](https://openai.github.io/openai-agents-python/tracing/): Built-in tracking of agent runs, allowing you to view, debug and optimize your workflows

Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.

Notably, our SDK [is compatible](https://openai.github.io/openai-agents-python/models/) with any model providers that support the OpenAI Chat Completions API format.

## Get started

1. Set up your Python environment

```
python -m venv env
.\env\Scripts\activate
```

2. Install Agents SDK

```
pip install openai-agents
```

## Hello world example

```python
from agents import Agent, Runner

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)

# Code within the code,
# Functions calling themselves,
# Infinite loop's dance.
```

(_If running this, ensure you set the `OPENAI_API_KEY` environment variable_)

(_For Jupyter notebook users, see [hello_world_jupyter.py](examples/basic/hello_world_jupyter.py)_)

## Handoffs example

```python
from agents import Agent, Runner
import asyncio

spanish_agent = Agent(
    name="Spanish agent",
    instructions="You only speak Spanish.",
)

english_agent = Agent(
    name="English agent",
    instructions="You only speak English",
)

triage_agent = Agent(
    name="Triage agent",
    instructions="Handoff to the appropriate agent based on the language of the request.",
    handoffs=[spanish_agent, english_agent],
)


async def main():
    result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
    print(result.final_output)
    # ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?


if __name__ == "__main__":
    asyncio.run(main())
```

## Functions example

```python
import asyncio

from agents import Agent, Runner, function_tool


@function_tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny."


agent = Agent(
    name="Hello world",
    instructions="You are a helpful agent.",
    tools=[get_weather],
)


async def main():
    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)
    # The weather in Tokyo is sunny.


if __name__ == "__main__":
    asyncio.run(main())
```

## The agent loop

When you call `Runner.run()`, we run a loop until we get a final output.

1. We call the LLM, using the model and settings on the agent, and the message history.
2. The LLM returns a response, which may include tool calls.
3. If the response has a final output (see below for more on this), we return it and end the loop.
4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
5. We process the tool calls (if any) and append the tool responses messages. Then we go to step 1.

There is a `max_turns` parameter that you can use to limit the number of times the loop executes.
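A minimal sketch of capping the loop with `max_turns`; the `MaxTurnsExceeded` exception name and import path are assumptions and should be checked against the SDK's exports:

```python
import asyncio

from agents import Agent, Runner
from agents.exceptions import MaxTurnsExceeded  # assumed exception; verify against the SDK

agent = Agent(name="Assistant", instructions="You are a helpful assistant")


async def main():
    try:
        # Stop the agent loop after at most three turns.
        result = await Runner.run(agent, "Summarize this repository.", max_turns=3)
        print(result.final_output)
    except MaxTurnsExceeded:
        print("The agent did not produce a final output within 3 turns.")


if __name__ == "__main__":
    asyncio.run(main())
```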
### Final output

Final output is the last thing the agent produces in the loop.

1. If you set an `output_type` on the agent, the final output is when the LLM returns something of that type. We use [structured outputs](https://platform.openai.com/docs/guides/structured-outputs) for this.
2. If there's no `output_type` (i.e. plain text responses), then the first LLM response without any tool calls or handoffs is considered as the final output.

As a result, the mental model for the agent loop is:

1. If the current agent has an `output_type`, the loop runs until the agent produces structured output matching that type.
2. If the current agent does not have an `output_type`, the loop runs until the current agent produces a message without any tool calls/handoffs.
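A minimal sketch of the structured-output case, assuming the agent's `output_type` accepts a Pydantic model; the model, agent name, and prompt below are made up for illustration:

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner


class WeatherReport(BaseModel):
    city: str
    summary: str


agent = Agent(
    name="Weather reporter",
    instructions="Report the weather for the requested city.",
    # With an output_type set, the loop runs until the LLM returns a WeatherReport.
    output_type=WeatherReport,
)


async def main():
    result = await Runner.run(agent, "What's the weather like in Tokyo?")
    report = result.final_output  # expected to be a WeatherReport instance
    print(report.city, report.summary)


if __name__ == "__main__":
    asyncio.run(main())
```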
## Common agent patterns

The Agents SDK is designed to be highly flexible, allowing you to model a wide range of LLM workflows including deterministic flows, iterative loops, and more. See examples in [`examples/agent_patterns`](examples/agent_patterns).

## Tracing

The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), and [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing).
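To group several runs into one trace, the tracing docs describe a `trace()` context manager. This sketch assumes it is importable from the top-level `agents` package; verify the exact import and signature against the Tracing link above:

```python
import asyncio

from agents import Agent, Runner, trace  # `trace` assumed to be exported at the top level

agent = Agent(name="Assistant", instructions="You are a helpful assistant")


async def main():
    # Both runs below should appear under a single workflow in the trace viewer.
    with trace("Haiku workflow"):
        first = await Runner.run(agent, "Write a haiku about the sea.")
        second = await Runner.run(agent, f"Translate this haiku to Spanish: {first.final_output}")
        print(second.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```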
## Development (only needed if you need to edit the SDK/examples)

0. Ensure you have [`uv`](https://docs.astral.sh/uv/) installed.

```bash
uv --version
```

1. Install dependencies

```bash
make sync
```

2. (After making changes) lint/test

```
make tests  # run tests
make mypy   # run typechecker
make lint   # run linter
```

## Acknowledgements

We'd like to acknowledge the excellent work of the open-source community, especially:

- [Pydantic](https://docs.pydantic.dev/latest/) (data validation) and [PydanticAI](https://ai.pydantic.dev/) (advanced agent framework)
- [MkDocs](https://github.com/squidfunk/mkdocs-material)
- [Griffe](https://github.com/mkdocstrings/griffe)
- [uv](https://github.com/astral-sh/uv) and [ruff](https://github.com/astral-sh/ruff)

We're committed to continuing to build the Agents SDK as an open source framework so others in the community can expand on our approach.
