
Commit 854df97

feat(api): add /v1/responses and built-in tools

[platform.openai.com/docs/changelog](https://platform.openai.com/docs/changelog)

1 parent a730f0e

File tree: 196 files changed, +10058 −1333 lines changed


.stats.yml (+2 −2)

@@ -1,2 +1,2 @@
-configured_endpoints: 74
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-b524aed1c2c5c928aa4e2c546f5dbb364e7b4d5027daf05e42e210b05a97c3c6.yml
+configured_endpoints: 81
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai-be834d63e326a82494e819085137f5eb15866f3fc787db1f3afe7168d419e18a.yml

README.md (+79 −150)
@@ -10,13 +10,10 @@ It is generated from our [OpenAPI specification](https://github.com/openai/opena
 
 ## Documentation
 
-The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). The full API of this library can be found in [api.md](api.md).
+The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs/api-reference). The full API of this library can be found in [api.md](api.md).
 
 ## Installation
 
-> [!IMPORTANT]
-> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.
-
 ```sh
 # install from PyPI
 pip install openai
@@ -26,46 +23,69 @@ pip install openai
 
 The full API of this library can be found in [api.md](api.md).
 
+The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.
+
 ```python
 import os
 from openai import OpenAI
 
 client = OpenAI(
-    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
+    # This is the default and can be omitted
+    api_key=os.environ.get("OPENAI_API_KEY"),
+)
+
+response = client.responses.create(
+    model="gpt-4o",
+    instructions="You are a coding assistant that talks like a pirate.",
+    input="How do I check if a Python object is an instance of a class?",
 )
 
-chat_completion = client.chat.completions.create(
+print(response.output_text)
+```
+
+The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+completion = client.chat.completions.create(
+    model="gpt-4o",
     messages=[
+        {"role": "developer", "content": "Talk like a pirate."},
         {
             "role": "user",
-            "content": "Say this is a test",
-        }
+            "content": "How do I check if a Python object is an instance of a class?",
+        },
     ],
-    model="gpt-4o",
 )
+
+print(completion.choices[0].message.content)
 ```
 
 While you can provide an `api_key` keyword argument,
 we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
 to add `OPENAI_API_KEY="My API Key"` to your `.env` file
-so that your API Key is not stored in source control.
+so that your API key is not stored in source control.
+[Get an API key here](https://platform.openai.com/settings/organization/api-keys).
 
 ### Vision
 
-With a hosted image:
+With an image URL:
 
 ```python
-response = client.chat.completions.create(
+prompt = "What is in this image?"
+img_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/d5/2023_06_08_Raccoon1.jpg/1599px-2023_06_08_Raccoon1.jpg"
+
+response = client.responses.create(
     model="gpt-4o-mini",
-    messages=[
+    input=[
         {
             "role": "user",
             "content": [
-                {"type": "text", "text": prompt},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"{img_url}"},
-                },
+                {"type": "input_text", "text": prompt},
+                {"type": "input_image", "image_url": f"{img_url}"},
             ],
         }
     ],
@@ -75,73 +95,29 @@ response = client.chat.completions.create(
 With the image as a base64 encoded string:
 
 ```python
-response = client.chat.completions.create(
+import base64
+from openai import OpenAI
+
+client = OpenAI()
+
+prompt = "What is in this image?"
+with open("path/to/image.png", "rb") as image_file:
+    b64_image = base64.b64encode(image_file.read()).decode("utf-8")
+
+response = client.responses.create(
     model="gpt-4o-mini",
-    messages=[
+    input=[
         {
             "role": "user",
             "content": [
-                {"type": "text", "text": prompt},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:{img_type};base64,{img_b64_str}"},
-                },
+                {"type": "input_text", "text": prompt},
+                {"type": "input_image", "image_url": f"data:image/png;base64,{b64_image}"},
             ],
         }
     ],
 )
 ```
 
-### Polling Helpers
-
-When interacting with the API some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes
-helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
-If an API method results in an action that could benefit from polling there will be a corresponding version of the
-method ending in '\_and_poll'.
-
-For instance to create a Run and poll until it reaches a terminal state you can run:
-
-```python
-run = client.beta.threads.runs.create_and_poll(
-    thread_id=thread.id,
-    assistant_id=assistant.id,
-)
-```
-
-More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle)
-
-### Bulk Upload Helpers
-
-When creating and interacting with vector stores, you can use polling helpers to monitor the status of operations.
-For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.
-
-```python
-sample_files = [Path("sample-paper.pdf"), ...]
-
-batch = await client.vector_stores.file_batches.upload_and_poll(
-    store.id,
-    files=sample_files,
-)
-```
-
-### Streaming Helpers
-
-The SDK also includes helpers to process streams and handle incoming events.
-
-```python
-with client.beta.threads.runs.stream(
-    thread_id=thread.id,
-    assistant_id=assistant.id,
-    instructions="Please address the user as Jane Doe. The user has a premium account.",
-) as stream:
-    for event in stream:
-        # Print the text from text delta events
-        if event.type == "thread.message.delta" and event.data.delta.content:
-            print(event.data.delta.content[0].text)
-```
-
-More information on streaming helpers can be found in the dedicated documentation: [helpers.md](helpers.md)
-
 ## Async usage
 
 Simply import `AsyncOpenAI` instead of `OpenAI` and use `await` with each API call:
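The base64 hunk above builds a `data:` URL string inline before passing it as an `input_image`. That encoding step can be sketched on its own; the `to_data_url` helper name is illustrative, not part of the SDK:

```python
import base64


def to_data_url(data: bytes, mime: str = "image/png") -> str:
    # Base64-encode raw image bytes and wrap them in a data: URL,
    # matching the f"data:image/png;base64,{b64_image}" string in the diff above.
    b64 = base64.b64encode(data).decode("utf-8")
    return f"data:{mime};base64,{b64}"


print(to_data_url(b"abc"))  # data:image/png;base64,YWJj
```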
@@ -152,20 +128,16 @@ import asyncio
 from openai import AsyncOpenAI
 
 client = AsyncOpenAI(
-    api_key=os.environ.get("OPENAI_API_KEY"),  # This is the default and can be omitted
+    # This is the default and can be omitted
+    api_key=os.environ.get("OPENAI_API_KEY"),
 )
 
 
 async def main() -> None:
-    chat_completion = await client.chat.completions.create(
-        messages=[
-            {
-                "role": "user",
-                "content": "Say this is a test",
-            }
-        ],
-        model="gpt-4o",
+    response = await client.responses.create(
+        model="gpt-4o", input="Explain disestablishmentarianism to a smart five year old."
     )
+    print(response.output_text)
 
 
 asyncio.run(main())
@@ -182,18 +154,14 @@ from openai import OpenAI
 
 client = OpenAI()
 
-stream = client.chat.completions.create(
-    messages=[
-        {
-            "role": "user",
-            "content": "Say this is a test",
-        }
-    ],
+stream = client.responses.create(
     model="gpt-4o",
+    input="Write a one-sentence bedtime story about a unicorn.",
     stream=True,
 )
-for chunk in stream:
-    print(chunk.choices[0].delta.content or "", end="")
+
+for event in stream:
+    print(event)
 ```
 
 The async client uses the exact same interface.
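The hunk above swaps chunk-based Chat Completions streaming for typed Responses events. A consumer of the new loop typically filters for text-delta events and concatenates them; a minimal sketch with stand-in event objects (the `FakeEvent` class and `collect_output_text` helper are ours, and the `response.output_text.delta` event type is assumed from the Responses streaming docs):

```python
from dataclasses import dataclass
from typing import Iterable


@dataclass
class FakeEvent:
    # minimal stand-in for the SDK's typed streaming events; field names are illustrative
    type: str
    delta: str = ""


def collect_output_text(stream: Iterable[FakeEvent]) -> str:
    # Accumulate only the text-delta events, the way a caller of the
    # `for event in stream` loop above might reassemble the final text.
    return "".join(e.delta for e in stream if e.type == "response.output_text.delta")


events = [
    FakeEvent("response.created"),
    FakeEvent("response.output_text.delta", "Once upon "),
    FakeEvent("response.output_text.delta", "a time."),
    FakeEvent("response.completed"),
]
print(collect_output_text(events))  # Once upon a time.
```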
@@ -206,58 +174,19 @@ client = AsyncOpenAI()
 
 
 async def main():
-    stream = await client.chat.completions.create(
-        model="gpt-4",
-        messages=[{"role": "user", "content": "Say this is a test"}],
+    stream = client.responses.create(
+        model="gpt-4o",
+        input="Write a one-sentence bedtime story about a unicorn.",
         stream=True,
     )
-    async for chunk in stream:
-        print(chunk.choices[0].delta.content or "", end="")
-
-
-asyncio.run(main())
-```
-
-## Module-level client
-
-> [!IMPORTANT]
-> We highly recommend instantiating client instances instead of relying on the global client.
 
-We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.
-
-```py
-import openai
-
-# optional; defaults to `os.environ['OPENAI_API_KEY']`
-openai.api_key = '...'
+    for event in stream:
+        print(event)
 
-# all client options can be configured just like the `OpenAI` instantiation counterpart
-openai.base_url = "https://..."
-openai.default_headers = {"x-foo": "true"}
 
-completion = openai.chat.completions.create(
-    model="gpt-4o",
-    messages=[
-        {
-            "role": "user",
-            "content": "How do I output all files in a directory using Python?",
-        },
-    ],
-)
-print(completion.choices[0].message.content)
+asyncio.run(main())
 ```
 
-The API is the exact same as the standard client instance-based API.
-
-This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.
-
-We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:
-
-- It can be difficult to reason about where client options are configured
-- It's not possible to change certain client options without potentially causing race conditions
-- It's harder to mock for testing purposes
-- It's not possible to control cleanup of network connections
-
 ## Realtime API beta
 
 The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.
@@ -304,7 +233,7 @@ However the real magic of the Realtime API is handling audio inputs / outputs, s
 
 ### Realtime error handling
 
-Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as *no errors are raised directly* by the SDK when an `error` event comes in.
+Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.
 
 ```py
 client = AsyncOpenAI()
@@ -408,11 +337,11 @@ from openai import OpenAI
 
 client = OpenAI()
 
-completion = client.chat.completions.create(
-    messages=[
+response = client.chat.responses.create(
+    input=[
         {
             "role": "user",
-            "content": "Can you generate an example json object describing a fruit?",
+            "content": "How much ?",
         }
     ],
     model="gpt-4o",
@@ -489,15 +418,16 @@ Error codes are as follows:
 All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.
 
 ```python
-completion = await client.chat.completions.create(
-    messages=[{"role": "user", "content": "Say this is a test"}], model="gpt-4"
+response = await client.responses.create(
+    model="gpt-4o-mini",
+    input="Say 'this is a test'.",
 )
-print(completion._request_id)  # req_123
+print(response._request_id)  # req_123
 ```
 
 Note that unlike other properties that use an `_` prefix, the `_request_id` property
-*is* public. Unless documented otherwise, *all* other `_` prefix properties,
-methods and modules are *private*.
+_is_ public. Unless documented otherwise, _all_ other `_` prefix properties,
+methods and modules are _private_.
 
 > [!IMPORTANT]
 > If you need to access request IDs for failed requests you must catch the `APIStatusError` exception
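As the hunk above describes, `_request_id` is populated from the `x-request-id` response header. The lookup itself is trivial and can be sketched outside the SDK (the `request_id_from_headers` helper is ours for illustration):

```python
def request_id_from_headers(headers: dict) -> "str | None":
    # The SDK surfaces the x-request-id response header as the public
    # `_request_id` property; this mirrors that lookup. HTTP header names
    # are case-insensitive, so normalize keys before reading.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-request-id")


print(request_id_from_headers({"X-Request-Id": "req_123"}))  # req_123
```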
@@ -514,8 +444,7 @@ except openai.APIStatusError as exc:
     raise exc
 ```
 
-
-### Retries
+## Retries
 
 Certain errors are automatically retried 2 times by default, with a short exponential backoff.
 Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,
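The Retries section above says retryable errors get 2 attempts by default with a short exponential backoff. The shape of such a schedule can be sketched with illustrative numbers; the `base` and `cap` values here are assumptions, not the SDK's actual internals:

```python
def backoff_delay(attempt: int, base: float = 0.5, cap: float = 8.0) -> float:
    # Exponential backoff sketch: delay doubles each attempt, capped at `cap`.
    # (base/cap are illustrative parameters, not the library's real constants.)
    return min(cap, base * (2 ** attempt))


# Delays before each of the default 2 retries after the initial request
print([backoff_delay(n) for n in range(2)])  # [0.5, 1.0]
```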
@@ -544,7 +473,7 @@ client.with_options(max_retries=5).chat.completions.create(
 )
 ```
 
-### Timeouts
+## Timeouts
 
 By default requests time out after 10 minutes. You can configure this with a `timeout` option,
 which accepts a float or an [`httpx.Timeout`](https://www.python-httpx.org/advanced/timeouts/#fine-tuning-the-configuration) object:
