Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-    ".": "1.107.2"
+    ".": "1.107.3"
 }
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
 configured_endpoints: 118
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-94b1e3cb0bdc616ff0c2f267c33dadd95f133b1f64e647aab6c64afb292b2793.yml
-openapi_spec_hash: 2395319ac9befd59b6536ae7f9564a05
-config_hash: 930dac3aa861344867e4ac84f037b5df
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d30ff992a48873c1466c49f3c01f2ec8933faebff23424748f8d056065b1bcef.yml
+openapi_spec_hash: e933ec43b46f45c348adb78840e5808d
+config_hash: bf45940f0a7805b4ec2017eecdd36893
9 changes: 9 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,14 @@
 # Changelog
 
+## 1.107.3 (2025-09-15)
+
+Full Changelog: [v1.107.2...v1.107.3](https://github.com/openai/openai-python/compare/v1.107.2...v1.107.3)
+
+### Chores
+
+* **api:** docs and spec refactoring ([9bab5da](https://github.com/openai/openai-python/commit/9bab5da1802c3575c58e73ed1470dd5fa61fd1d2))
+* **tests:** simplify `get_platform` test ([0b1f6a2](https://github.com/openai/openai-python/commit/0b1f6a28d5a59e10873264e976d2e332903eef29))
+
 ## 1.107.2 (2025-09-12)
 
 Full Changelog: [v1.107.1...v1.107.2](https://github.com/openai/openai-python/compare/v1.107.1...v1.107.2)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.107.2"
+version = "1.107.3"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.107.2" # x-release-please-version
+__version__ = "1.107.3" # x-release-please-version
16 changes: 10 additions & 6 deletions src/openai/resources/chat/completions/completions.py
@@ -1300,10 +1300,12 @@ def list(
 
           limit: Number of Chat Completions to retrieve.
 
-          metadata:
-              A list of metadata keys to filter the Chat Completions by. Example:
+          metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful
+              for storing additional information about the object in a structured format, and
+              querying for objects via API or the dashboard.
 
-              `metadata[key1]=value1&metadata[key2]=value2`
+              Keys are strings with a maximum length of 64 characters. Values are strings with
+              a maximum length of 512 characters.
 
           model: The model used to generate the Chat Completions.
 
@@ -2736,10 +2738,12 @@ def list(
 
           limit: Number of Chat Completions to retrieve.
 
-          metadata:
-              A list of metadata keys to filter the Chat Completions by. Example:
+          metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful
+              for storing additional information about the object in a structured format, and
+              querying for objects via API or the dashboard.
 
-              `metadata[key1]=value1&metadata[key2]=value2`
+              Keys are strings with a maximum length of 64 characters. Values are strings with
+              a maximum length of 512 characters.
 
           model: The model used to generate the Chat Completions.
 
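The updated docstring replaces the earlier, inaccurate "list of metadata keys" wording with the actual constraints: at most 16 key-value pairs, keys up to 64 characters, values up to 512 characters. Those limits can be checked client-side before issuing a request; the helper below is an illustrative sketch, not part of the SDK.

```python
def validate_metadata(metadata: dict[str, str]) -> dict[str, str]:
    """Check the documented metadata limits: at most 16 pairs, 64-char keys, 512-char values."""
    if len(metadata) > 16:
        raise ValueError(f"metadata supports at most 16 key-value pairs, got {len(metadata)}")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key {key!r} exceeds 64 characters")
        if len(value) > 512:
            raise ValueError(f"metadata value for key {key!r} exceeds 512 characters")
    return metadata
```

Validating early surfaces a clear error locally instead of a 400 from the API.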
16 changes: 12 additions & 4 deletions src/openai/resources/conversations/conversations.py
@@ -73,8 +73,12 @@ def create(
           items: Initial items to include in the conversation context. You may add up to 20 items
               at a time.
 
-          metadata: Set of 16 key-value pairs that can be attached to an object. Useful for storing
-              additional information about the object in a structured format.
+          metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful
+              for storing additional information about the object in a structured format, and
+              querying for objects via API or the dashboard.
+
+              Keys are strings with a maximum length of 64 characters. Values are strings with
+              a maximum length of 512 characters.
 
           extra_headers: Send extra headers
 
@@ -250,8 +254,12 @@ async def create(
           items: Initial items to include in the conversation context. You may add up to 20 items
               at a time.
 
-          metadata: Set of 16 key-value pairs that can be attached to an object. Useful for storing
-              additional information about the object in a structured format.
+          metadata: Set of 16 key-value pairs that can be attached to an object. This can be useful
+              for storing additional information about the object in a structured format, and
+              querying for objects via API or the dashboard.
+
+              Keys are strings with a maximum length of 64 characters. Values are strings with
+              a maximum length of 512 characters.
 
           extra_headers: Send extra headers
 
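The `items` docstring caps initial conversation items at 20 per call, so seeding a longer history implies splitting it into batches. The chunking helper below is an inference from that limit, not SDK code.

```python
from typing import Iterator, Sequence

MAX_ITEMS_PER_CALL = 20  # documented cap on initial conversation items


def batch_items(items: Sequence[dict], size: int = MAX_ITEMS_PER_CALL) -> Iterator[list]:
    """Yield successive batches no larger than the documented per-call limit."""
    for start in range(0, len(items), size):
        yield list(items[start:start + size])
```

Each batch could then be sent in its own create/add call, in order.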
12 changes: 6 additions & 6 deletions src/openai/types/audio/transcription_create_params.py
@@ -43,12 +43,12 @@ class TranscriptionCreateParamsBase(TypedDict, total=False):
     """
 
     include: List[TranscriptionInclude]
-    """Additional information to include in the transcription response.
-
-    `logprobs` will return the log probabilities of the tokens in the response to
-    understand the model's confidence in the transcription. `logprobs` only works
-    with response_format set to `json` and only with the models `gpt-4o-transcribe`
-    and `gpt-4o-mini-transcribe`.
+    """
+    Additional information to include in the transcription response. `logprobs` will
+    return the log probabilities of the tokens in the response to understand the
+    model's confidence in the transcription. `logprobs` only works with
+    response_format set to `json` and only with the models `gpt-4o-transcribe` and
+    `gpt-4o-mini-transcribe`.
     """
 
     language: str
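The docstring above states two preconditions for `logprobs`: `response_format` must be `json`, and the model must be `gpt-4o-transcribe` or `gpt-4o-mini-transcribe`. A caller can enforce both before hitting the API; this validator is a hypothetical sketch, not part of the library.

```python
LOGPROBS_MODELS = {"gpt-4o-transcribe", "gpt-4o-mini-transcribe"}  # models named in the docstring


def check_transcription_params(model: str, response_format: str, include: list) -> None:
    """Raise early if `logprobs` is requested in an unsupported configuration."""
    if "logprobs" in include:
        if response_format != "json":
            raise ValueError("`logprobs` requires response_format='json'")
        if model not in LOGPROBS_MODELS:
            raise ValueError(f"`logprobs` is not supported for model {model!r}")
```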
Original file line number Diff line number Diff line change
@@ -38,8 +38,8 @@ class ChatCompletionAssistantMessageParam(TypedDict, total=False):
     """The role of the messages author, in this case `assistant`."""
 
     audio: Optional[Audio]
-    """Data about a previous audio response from the model.
-
+    """
+    Data about a previous audio response from the model.
     [Learn more](https://platform.openai.com/docs/guides/audio).
     """
 
8 changes: 6 additions & 2 deletions src/openai/types/chat/completion_list_params.py
@@ -18,9 +18,13 @@ class CompletionListParams(TypedDict, total=False):
     """Number of Chat Completions to retrieve."""
 
     metadata: Optional[Metadata]
-    """A list of metadata keys to filter the Chat Completions by. Example:
-
-    `metadata[key1]=value1&metadata[key2]=value2`
+    """Set of 16 key-value pairs that can be attached to an object.
+
+    This can be useful for storing additional information about the object in a
+    structured format, and querying for objects via API or the dashboard.
+
+    Keys are strings with a maximum length of 64 characters. Values are strings with
+    a maximum length of 512 characters.
     """
 
     model: str
Original file line number Diff line number Diff line change
@@ -21,6 +21,9 @@ class ConversationCreateParams(TypedDict, total=False):
     metadata: Optional[Metadata]
     """Set of 16 key-value pairs that can be attached to an object.
 
-    Useful for storing additional information about the object in a structured
-    format.
+    This can be useful for storing additional information about the object in a
+    structured format, and querying for objects via API or the dashboard.
+
+    Keys are strings with a maximum length of 64 characters. Values are strings with
+    a maximum length of 512 characters.
     """
9 changes: 6 additions & 3 deletions src/openai/types/evals/run_cancel_response.py
@@ -100,9 +100,12 @@ class DataSourceResponsesSourceResponses(BaseModel):
     """
 
     reasoning_effort: Optional[ReasoningEffort] = None
-    """Optional reasoning effort parameter.
-
-    This is a query parameter used to select responses.
+    """
+    Constrains effort on reasoning for
+    [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+    supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+    effort can result in faster responses and fewer tokens used on reasoning in a
+    response.
     """
 
     temperature: Optional[float] = None
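The new docstring enumerates the supported `reasoning_effort` values: `minimal`, `low`, `medium`, and `high`. A caller filtering eval runs could normalize and validate the value up front; this helper is an illustrative sketch, not part of the SDK.

```python
from typing import Optional

REASONING_EFFORTS = ("minimal", "low", "medium", "high")  # values named in the docstring


def normalize_reasoning_effort(value: Optional[str]) -> Optional[str]:
    """Validate a reasoning_effort value; None means 'no filter'."""
    if value is None:
        return None
    effort = value.strip().lower()
    if effort not in REASONING_EFFORTS:
        raise ValueError(f"reasoning_effort must be one of {REASONING_EFFORTS}, got {value!r}")
    return effort
```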
9 changes: 6 additions & 3 deletions src/openai/types/evals/run_create_params.py
@@ -113,9 +113,12 @@ class DataSourceCreateEvalResponsesRunDataSourceSourceResponses(TypedDict, total
     """
 
     reasoning_effort: Optional[ReasoningEffort]
-    """Optional reasoning effort parameter.
-
-    This is a query parameter used to select responses.
+    """
+    Constrains effort on reasoning for
+    [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+    supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+    effort can result in faster responses and fewer tokens used on reasoning in a
+    response.
     """
 
     temperature: Optional[float]
9 changes: 6 additions & 3 deletions src/openai/types/evals/run_create_response.py
@@ -100,9 +100,12 @@ class DataSourceResponsesSourceResponses(BaseModel):
     """
 
     reasoning_effort: Optional[ReasoningEffort] = None
-    """Optional reasoning effort parameter.
-
-    This is a query parameter used to select responses.
+    """
+    Constrains effort on reasoning for
+    [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+    supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+    effort can result in faster responses and fewer tokens used on reasoning in a
+    response.
     """
 
     temperature: Optional[float] = None
9 changes: 6 additions & 3 deletions src/openai/types/evals/run_list_response.py
@@ -100,9 +100,12 @@ class DataSourceResponsesSourceResponses(BaseModel):
     """
 
     reasoning_effort: Optional[ReasoningEffort] = None
-    """Optional reasoning effort parameter.
-
-    This is a query parameter used to select responses.
+    """
+    Constrains effort on reasoning for
+    [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+    supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+    effort can result in faster responses and fewer tokens used on reasoning in a
+    response.
    """
 
     temperature: Optional[float] = None
9 changes: 6 additions & 3 deletions src/openai/types/evals/run_retrieve_response.py
@@ -100,9 +100,12 @@ class DataSourceResponsesSourceResponses(BaseModel):
     """
 
     reasoning_effort: Optional[ReasoningEffort] = None
-    """Optional reasoning effort parameter.
-
-    This is a query parameter used to select responses.
+    """
+    Constrains effort on reasoning for
+    [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+    supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+    effort can result in faster responses and fewer tokens used on reasoning in a
+    response.
     """
 
     temperature: Optional[float] = None
Original file line number Diff line number Diff line change
@@ -83,8 +83,8 @@ class RealtimeResponseCreateParams(BaseModel):
     """
 
     prompt: Optional[ResponsePrompt] = None
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
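The `prompt` field above points at a reusable prompt template rather than inlining prompt text. A minimal builder for such a reference is sketched below; the payload shape (`id`, optional `version`, `variables`) is an assumption based on the linked reusable-prompts guide, and the IDs are placeholders.

```python
from typing import Optional


def prompt_reference(prompt_id: str, version: Optional[str] = None, **variables: str) -> dict:
    """Build a prompt-template reference: an ID, an optional version, and variable bindings."""
    payload: dict = {"id": prompt_id}
    if version is not None:
        payload["version"] = version
    if variables:
        payload["variables"] = dict(variables)
    return payload
```

The resulting dict would be passed as the `prompt` field of a response or session create call.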
Original file line number Diff line number Diff line change
@@ -84,8 +84,8 @@ class RealtimeResponseCreateParamsParam(TypedDict, total=False):
     """
 
     prompt: Optional[ResponsePromptParam]
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
Original file line number Diff line number Diff line change
@@ -76,8 +76,8 @@ class RealtimeSessionCreateRequest(BaseModel):
     """
 
     prompt: Optional[ResponsePrompt] = None
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
Original file line number Diff line number Diff line change
@@ -76,8 +76,8 @@ class RealtimeSessionCreateRequestParam(TypedDict, total=False):
     """
 
     prompt: Optional[ResponsePromptParam]
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
    [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
Original file line number Diff line number Diff line change
@@ -429,8 +429,8 @@ class RealtimeSessionCreateResponse(BaseModel):
     """
 
     prompt: Optional[ResponsePrompt] = None
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
4 changes: 2 additions & 2 deletions src/openai/types/responses/response.py
@@ -180,8 +180,8 @@ class Response(BaseModel):
     """
 
     prompt: Optional[ResponsePrompt] = None
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 
Original file line number Diff line number Diff line change
@@ -39,9 +39,9 @@ class ResponseCodeInterpreterToolCall(BaseModel):
     """The ID of the container used to run the code."""
 
     outputs: Optional[List[Output]] = None
-    """The outputs generated by the code interpreter, such as logs or images.
-
-    Can be null if no outputs are available.
+    """
+    The outputs generated by the code interpreter, such as logs or images. Can be
+    null if no outputs are available.
     """
 
     status: Literal["in_progress", "completed", "incomplete", "interpreting", "failed"]
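Since `outputs` is documented as nullable, consumers should handle the `None` case before iterating. The helper below sketches that pattern; the per-item shape (a `type` discriminator of `"logs"` or `"image"` with a `logs` text field) is an assumption for illustration.

```python
from typing import List, Optional


def collect_logs(outputs: Optional[List[dict]]) -> List[str]:
    """Gather log text from a code interpreter call, tolerating the null case."""
    if outputs is None:  # the field is documented as nullable
        return []
    return [item.get("logs", "") for item in outputs if item.get("type") == "logs"]
```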
Original file line number Diff line number Diff line change
@@ -38,9 +38,9 @@ class ResponseCodeInterpreterToolCallParam(TypedDict, total=False):
     """The ID of the container used to run the code."""
 
     outputs: Required[Optional[Iterable[Output]]]
-    """The outputs generated by the code interpreter, such as logs or images.
-
-    Can be null if no outputs are available.
+    """
+    The outputs generated by the code interpreter, such as logs or images. Can be
+    null if no outputs are available.
     """
 
     status: Required[Literal["in_progress", "completed", "incomplete", "interpreting", "failed"]]
4 changes: 2 additions & 2 deletions src/openai/types/responses/response_create_params.py
@@ -134,8 +134,8 @@ class ResponseCreateParamsBase(TypedDict, total=False):
     """
 
     prompt: Optional[ResponsePromptParam]
-    """Reference to a prompt template and its variables.
-
+    """
+    Reference to a prompt template and its variables.
     [Learn more](https://platform.openai.com/docs/guides/text?api-mode=responses#reusable-prompts).
     """
 