## Documentation

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs/api-reference). The full API of this library can be found in [api.md](api.md).

## Installation

> [!IMPORTANT]
> The SDK was rewritten in v1, which was released November 6th 2023. See the [v1 migration guide](https://github.com/openai/openai-python/discussions/742), which includes scripts to automatically update your code.

```sh
# install from PyPI
pip install openai
```
The full API of this library can be found in [api.md](api.md).

The primary API for interacting with OpenAI models is the [Responses API](https://platform.openai.com/docs/api-reference/responses). You can generate text from the model with the code below.

```python
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)

response = client.responses.create(
    model="gpt-4o",
    instructions="You are a coding assistant that talks like a pirate.",
    input="How do I check if a Python object is an instance of a class?",
)

print(response.output_text)
```

The previous standard (supported indefinitely) for generating text is the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You can use that API to generate text from the model with the code below.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "Talk like a pirate."},
        {
            "role": "user",
            "content": "How do I check if a Python object is an instance of a class?",
        },
    ],
)

print(completion.choices[0].message.content)
```

While you can provide an `api_key` keyword argument,
we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/)
to add `OPENAI_API_KEY="My API Key"` to your `.env` file
so that your API key is not stored in source control.
[Get an API key here](https://platform.openai.com/settings/organization/api-keys).
When interacting with the API some actions such as starting a Run and adding files to vector stores are asynchronous and take time to complete. The SDK includes
helper functions which will poll the status until it reaches a terminal state and then return the resulting object.
If an API method results in an action that could benefit from polling there will be a corresponding version of the
method ending in `_and_poll`.

For instance to create a Run and poll until it reaches a terminal state you can run:

```python
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```

More information on the lifecycle of a Run can be found in the [Run Lifecycle Documentation](https://platform.openai.com/docs/assistants/how-it-works/run-lifecycle)

### Bulk Upload Helpers

When creating and interacting with vector stores, you can use polling helpers to monitor the status of operations.
For convenience, we also provide a bulk upload helper to allow you to simultaneously upload several files at once.
> We highly recommend instantiating client instances instead of relying on the global client.

We also expose a global client instance that is accessible in a similar fashion to versions prior to v1.

```py
import openai

# optional; defaults to `os.environ['OPENAI_API_KEY']`
openai.api_key = '...'

# all client options can be configured just like the `OpenAI` instantiation counterpart
openai.base_url = "https://..."
openai.default_headers = {"x-foo": "true"}

completion = openai.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```

The API is the exact same as the standard client instance-based API.

This is intended to be used within REPLs or notebooks for faster iteration, **not** in application code.

We recommend that you always instantiate a client (e.g., with `client = OpenAI()`) in application code because:

- It can be difficult to reason about where client options are configured
- It's not possible to change certain client options without potentially causing race conditions
- It's harder to mock for testing purposes
- It's not possible to control cleanup of network connections
190
## Realtime API beta
262
191
263
192
The Realtime API enables you to build low-latency, multi-modal conversational experiences. It currently supports text and audio as both input and output, as well as [function calling](https://platform.openai.com/docs/guides/function-calling) through a WebSocket connection.
@@ -304,7 +233,7 @@ However the real magic of the Realtime API is handling audio inputs / outputs, s
304
233
305
234
### Realtime error handling
306
235
307
-
Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as *no errors are raised directly* by the SDK when an `error` event comes in.
236
+
Whenever an error occurs, the Realtime API will send an [`error` event](https://platform.openai.com/docs/guides/realtime-model-capabilities#error-handling) and the connection will stay open and remain usable. This means you need to handle it yourself, as _no errors are raised directly_ by the SDK when an `error` event comes in.
308
237
309
238
```py
310
239
client = AsyncOpenAI()
```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    input=[
        {
            "role": "user",
            "content": "Can you generate an example json object describing a fruit?",
        }
    ],
    model="gpt-4o",
)
```
All object responses in the SDK provide a `_request_id` property which is added from the `x-request-id` response header so that you can quickly log failing requests and report them back to OpenAI.

```python
response = await client.responses.create(
    model="gpt-4o-mini",
    input="Say 'this is a test'.",
)
print(response._request_id)  # req_123
```

Note that unlike other properties that use an `_` prefix, the `_request_id` property
_is_ public. Unless documented otherwise, _all_ other `_` prefix properties,
methods and modules are _private_.

> [!IMPORTANT]
> If you need to access request IDs for failed requests you must catch the `APIStatusError` exception
```python
except openai.APIStatusError as exc:
    raise exc
```

## Retries

Certain errors are automatically retried 2 times by default, with a short exponential backoff.
Connection errors (for example, due to a network connectivity problem), 408 Request Timeout, 409 Conflict,