release: 1.66.5 #2223

Merged 12 commits on Mar 18, 2025
39 changes: 39 additions & 0 deletions .github/workflows/create-releases.yml
@@ -0,0 +1,39 @@
name: Create releases
on:
  schedule:
    - cron: '0 5 * * *' # every day at 5am UTC
  push:
    branches:
      - main

jobs:
  release:
    name: release
    if: github.ref == 'refs/heads/main' && github.repository == 'openai/openai-python'
    runs-on: ubuntu-latest
    environment: publish

    steps:
      - uses: actions/checkout@v4

      - uses: stainless-api/trigger-release-please@v1
        id: release
        with:
          repo: ${{ github.event.repository.full_name }}
          stainless-api-key: ${{ secrets.STAINLESS_API_KEY }}

      - name: Install Rye
        if: ${{ steps.release.outputs.releases_created }}
        run: |
          curl -sSf https://rye.astral.sh/get | bash
          echo "$HOME/.rye/shims" >> $GITHUB_PATH
        env:
          RYE_VERSION: '0.44.0'
          RYE_INSTALL_OPTION: '--yes'

      - name: Publish to PyPI
        if: ${{ steps.release.outputs.releases_created }}
        run: |
          bash ./bin/publish-pypi
        env:
          PYPI_TOKEN: ${{ secrets.OPENAI_PYPI_TOKEN || secrets.PYPI_TOKEN }}
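The `PYPI_TOKEN` env line uses the GitHub Actions expression `secrets.OPENAI_PYPI_TOKEN || secrets.PYPI_TOKEN`, where `||` yields the first truthy operand — so an unset or empty `OPENAI_PYPI_TOKEN` falls through to `PYPI_TOKEN`. A minimal Python sketch of the same fallback logic (the function itself is hypothetical, only the secret names come from the workflow):

```python
def resolve_pypi_token(env: dict) -> str:
    # Mirrors `secrets.OPENAI_PYPI_TOKEN || secrets.PYPI_TOKEN`:
    # `or` short-circuits, so an empty/missing first value falls through.
    token = env.get("OPENAI_PYPI_TOKEN") or env.get("PYPI_TOKEN")
    if not token:
        raise RuntimeError("no PyPI token configured")
    return token

resolve_pypi_token({"OPENAI_PYPI_TOKEN": "org-scoped", "PYPI_TOKEN": "classic"})  # → "org-scoped"
resolve_pypi_token({"PYPI_TOKEN": "classic"})  # → "classic"
```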
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.66.4"
+  ".": "1.66.5"
 }
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
 configured_endpoints: 81
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-c8579861bc21d4d2155a5b9e8e7d54faee8083730673c4d32cbbe573d7fb4116.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-f3bce04386c4fcfd5037e0477fbaa39010003fd1558eb5185fe4a71dd6a05fdd.yml
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

## 1.66.5 (2025-03-18)

Full Changelog: [v1.66.4...v1.66.5](https://github.com/openai/openai-python/compare/v1.66.4...v1.66.5)

### Bug Fixes

* **types:** improve responses type names ([#2224](https://github.com/openai/openai-python/issues/2224)) ([5f7beb8](https://github.com/openai/openai-python/commit/5f7beb873af5ccef2551f34ab3ef098e099ce9c6))


### Chores

* **internal:** add back releases workflow ([c71d4c9](https://github.com/openai/openai-python/commit/c71d4c918eab3532b36ea944b0c4069db6ac2d38))
* **internal:** codegen related update ([#2222](https://github.com/openai/openai-python/issues/2222)) ([f570d91](https://github.com/openai/openai-python/commit/f570d914a16cb5092533e32dfd863027d378c0b5))

## 1.66.4 (2025-03-17)

Full Changelog: [v1.66.3...v1.66.4](https://github.com/openai/openai-python/compare/v1.66.3...v1.66.4)
8 changes: 7 additions & 1 deletion api.md
@@ -605,6 +605,8 @@ from openai.types.responses import (
     ResponseCodeInterpreterToolCall,
     ResponseCompletedEvent,
     ResponseComputerToolCall,
+    ResponseComputerToolCallOutputItem,
+    ResponseComputerToolCallOutputScreenshot,
     ResponseContent,
     ResponseContentPartAddedEvent,
     ResponseContentPartDoneEvent,
@@ -621,6 +623,8 @@ from openai.types.responses import (
     ResponseFunctionCallArgumentsDeltaEvent,
     ResponseFunctionCallArgumentsDoneEvent,
     ResponseFunctionToolCall,
+    ResponseFunctionToolCallItem,
+    ResponseFunctionToolCallOutputItem,
     ResponseFunctionWebSearch,
     ResponseInProgressEvent,
     ResponseIncludable,
@@ -632,7 +636,9 @@ from openai.types.responses import (
     ResponseInputImage,
     ResponseInputItem,
     ResponseInputMessageContentList,
+    ResponseInputMessageItem,
     ResponseInputText,
+    ResponseItem,
     ResponseOutputAudio,
     ResponseOutputItem,
     ResponseOutputItemAddedEvent,
@@ -677,4 +683,4 @@ from openai.types.responses import ResponseItemList

 Methods:

-- <code title="get /responses/{response_id}/input_items">client.responses.input_items.<a href="./src/openai/resources/responses/input_items.py">list</a>(response_id, \*\*<a href="src/openai/types/responses/input_item_list_params.py">params</a>) -> SyncCursorPage[Data]</code>
+- <code title="get /responses/{response_id}/input_items">client.responses.input_items.<a href="./src/openai/resources/responses/input_items.py">list</a>(response_id, \*\*<a href="src/openai/types/responses/input_item_list_params.py">params</a>) -> <a href="./src/openai/types/responses/response_item.py">SyncCursorPage[ResponseItem]</a></code>
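The `ResponseItem` return type above is a union of item models, not a single class — which is why the resource code later passes `cast(Any, ResponseItem)` with the comment "Union types cannot be passed in as arguments in the type system". A hedged sketch of how such a tagged union is dispatched at parse time, using hypothetical dataclass stand-ins (`MessageItem`, `FunctionCallItem`, `parse_item` are illustrative, not SDK names):

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical stand-ins for two members of the ResponseItem union;
# the real SDK uses pydantic models discriminated on the `type` field.
@dataclass
class MessageItem:
    type: str
    content: str

@dataclass
class FunctionCallItem:
    type: str
    name: str

ResponseItem = Union[MessageItem, FunctionCallItem]  # an alias, not a runtime class

def parse_item(raw: dict) -> "ResponseItem":
    # Dispatch on the `type` discriminator, as a tagged-union validator would.
    if raw["type"] == "message":
        return MessageItem(type="message", content=raw["content"])
    if raw["type"] == "function_call":
        return FunctionCallItem(type="function_call", name=raw["name"])
    raise ValueError(f"unknown item type: {raw['type']}")

items = [parse_item(r) for r in (
    {"type": "message", "content": "hi"},
    {"type": "function_call", "name": "get_weather"},
)]
```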
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.66.4"
+version = "1.66.5"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

 __title__ = "openai"
-__version__ = "1.66.4"  # x-release-please-version
+__version__ = "1.66.5"  # x-release-please-version
16 changes: 8 additions & 8 deletions src/openai/resources/batches.py
@@ -49,7 +49,7 @@ def create(
         self,
         *,
         completion_window: Literal["24h"],
-        endpoint: Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
+        endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
         input_file_id: str,
         metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -67,9 +67,9 @@ def create(
           is supported.

           endpoint: The endpoint to be used for all requests in the batch. Currently
-            `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
-            Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
-            embedding inputs across all requests in the batch.
+            `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
+            are supported. Note that `/v1/embeddings` batches are also restricted to a
+            maximum of 50,000 embedding inputs across all requests in the batch.

           input_file_id: The ID of an uploaded file that contains requests for the new batch.

@@ -259,7 +259,7 @@ async def create(
         self,
         *,
         completion_window: Literal["24h"],
-        endpoint: Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
+        endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
         input_file_id: str,
         metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
         # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -277,9 +277,9 @@ async def create(
           is supported.

           endpoint: The endpoint to be used for all requests in the batch. Currently
-            `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
-            Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
-            embedding inputs across all requests in the batch.
+            `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
+            are supported. Note that `/v1/embeddings` batches are also restricted to a
+            maximum of 50,000 embedding inputs across all requests in the batch.

           input_file_id: The ID of an uploaded file that contains requests for the new batch.
14 changes: 7 additions & 7 deletions src/openai/resources/responses/input_items.py
@@ -16,7 +16,7 @@
 from ...pagination import SyncCursorPage, AsyncCursorPage
 from ..._base_client import AsyncPaginator, make_request_options
 from ...types.responses import input_item_list_params
-from ...types.responses.response_item_list import Data
+from ...types.responses.response_item import ResponseItem

 __all__ = ["InputItems", "AsyncInputItems"]

@@ -55,7 +55,7 @@ def list(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> SyncCursorPage[Data]:
+    ) -> SyncCursorPage[ResponseItem]:
         """
         Returns a list of input items for a given response.

@@ -84,7 +84,7 @@ def list(
             raise ValueError(f"Expected a non-empty value for `response_id` but received {response_id!r}")
         return self._get_api_list(
             f"/responses/{response_id}/input_items",
-            page=SyncCursorPage[Data],
+            page=SyncCursorPage[ResponseItem],
             options=make_request_options(
                 extra_headers=extra_headers,
                 extra_query=extra_query,
@@ -100,7 +100,7 @@ def list(
                     input_item_list_params.InputItemListParams,
                 ),
             ),
-            model=cast(Any, Data),  # Union types cannot be passed in as arguments in the type system
+            model=cast(Any, ResponseItem),  # Union types cannot be passed in as arguments in the type system
         )


@@ -138,7 +138,7 @@ def list(
         extra_query: Query | None = None,
         extra_body: Body | None = None,
         timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
-    ) -> AsyncPaginator[Data, AsyncCursorPage[Data]]:
+    ) -> AsyncPaginator[ResponseItem, AsyncCursorPage[ResponseItem]]:
         """
         Returns a list of input items for a given response.

@@ -167,7 +167,7 @@ def list(
             raise ValueError(f"Expected a non-empty value for `response_id` but received {response_id!r}")
         return self._get_api_list(
             f"/responses/{response_id}/input_items",
-            page=AsyncCursorPage[Data],
+            page=AsyncCursorPage[ResponseItem],
             options=make_request_options(
                 extra_headers=extra_headers,
                 extra_query=extra_query,
@@ -183,7 +183,7 @@ def list(
                     input_item_list_params.InputItemListParams,
                 ),
             ),
-            model=cast(Any, Data),  # Union types cannot be passed in as arguments in the type system
+            model=cast(Any, ResponseItem),  # Union types cannot be passed in as arguments in the type system
         )


9 changes: 5 additions & 4 deletions src/openai/types/batch_create_params.py
@@ -17,12 +17,13 @@ class BatchCreateParams(TypedDict, total=False):
     Currently only `24h` is supported.
     """

-    endpoint: Required[Literal["/v1/chat/completions", "/v1/embeddings", "/v1/completions"]]
+    endpoint: Required[Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"]]
     """The endpoint to be used for all requests in the batch.

-    Currently `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are
-    supported. Note that `/v1/embeddings` batches are also restricted to a maximum
-    of 50,000 embedding inputs across all requests in the batch.
+    Currently `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and
+    `/v1/completions` are supported. Note that `/v1/embeddings` batches are also
+    restricted to a maximum of 50,000 embedding inputs across all requests in the
+    batch.
     """

     input_file_id: Required[str]
7 changes: 5 additions & 2 deletions src/openai/types/chat/chat_completion_chunk.py
@@ -142,6 +142,9 @@ class ChatCompletionChunk(BaseModel):
     """
     An optional field that will only be present when you set
     `stream_options: {"include_usage": true}` in your request. When present, it
-    contains a null value except for the last chunk which contains the token usage
-    statistics for the entire request.
+    contains a null value **except for the last chunk** which contains the token
+    usage statistics for the entire request.
+
+    **NOTE:** If the stream is interrupted or cancelled, you may not receive the
+    final usage chunk which contains the total token usage for the request.
     """
@@ -22,7 +22,7 @@ class FileFile(TypedDict, total=False):
     file_id: str
     """The ID of an uploaded file to use as input."""

-    file_name: str
+    filename: str
     """The name of the file, used when passing the file to the model as a string."""

@@ -12,6 +12,9 @@ class ChatCompletionStreamOptionsParam(TypedDict, total=False):
     """If set, an additional chunk will be streamed before the `data: [DONE]` message.

     The `usage` field on this chunk shows the token usage statistics for the entire
-    request, and the `choices` field will always be an empty array. All other chunks
-    will also include a `usage` field, but with a null value.
+    request, and the `choices` field will always be an empty array.
+
+    All other chunks will also include a `usage` field, but with a null value.
+    **NOTE:** If the stream is interrupted, you may not receive the final usage
+    chunk which contains the total token usage for the request.
     """
15 changes: 15 additions & 0 deletions src/openai/types/responses/__init__.py
@@ -7,6 +7,7 @@
 from .tool_param import ToolParam as ToolParam
 from .computer_tool import ComputerTool as ComputerTool
 from .function_tool import FunctionTool as FunctionTool
+from .response_item import ResponseItem as ResponseItem
 from .response_error import ResponseError as ResponseError
 from .response_usage import ResponseUsage as ResponseUsage
 from .parsed_response import (
@@ -66,6 +67,7 @@
 from .response_computer_tool_call import ResponseComputerToolCall as ResponseComputerToolCall
 from .response_format_text_config import ResponseFormatTextConfig as ResponseFormatTextConfig
 from .response_function_tool_call import ResponseFunctionToolCall as ResponseFunctionToolCall
+from .response_input_message_item import ResponseInputMessageItem as ResponseInputMessageItem
 from .response_refusal_done_event import ResponseRefusalDoneEvent as ResponseRefusalDoneEvent
 from .response_function_web_search import ResponseFunctionWebSearch as ResponseFunctionWebSearch
 from .response_input_content_param import ResponseInputContentParam as ResponseInputContentParam
@@ -76,6 +78,7 @@
 from .response_file_search_tool_call import ResponseFileSearchToolCall as ResponseFileSearchToolCall
 from .response_output_item_done_event import ResponseOutputItemDoneEvent as ResponseOutputItemDoneEvent
 from .response_content_part_done_event import ResponseContentPartDoneEvent as ResponseContentPartDoneEvent
+from .response_function_tool_call_item import ResponseFunctionToolCallItem as ResponseFunctionToolCallItem
 from .response_output_item_added_event import ResponseOutputItemAddedEvent as ResponseOutputItemAddedEvent
 from .response_computer_tool_call_param import ResponseComputerToolCallParam as ResponseComputerToolCallParam
 from .response_content_part_added_event import ResponseContentPartAddedEvent as ResponseContentPartAddedEvent
@@ -90,9 +93,15 @@
 from .response_audio_transcript_delta_event import (
     ResponseAudioTranscriptDeltaEvent as ResponseAudioTranscriptDeltaEvent,
 )
+from .response_computer_tool_call_output_item import (
+    ResponseComputerToolCallOutputItem as ResponseComputerToolCallOutputItem,
+)
 from .response_format_text_json_schema_config import (
     ResponseFormatTextJSONSchemaConfig as ResponseFormatTextJSONSchemaConfig,
 )
+from .response_function_tool_call_output_item import (
+    ResponseFunctionToolCallOutputItem as ResponseFunctionToolCallOutputItem,
+)
 from .response_web_search_call_completed_event import (
     ResponseWebSearchCallCompletedEvent as ResponseWebSearchCallCompletedEvent,
 )
@@ -120,6 +129,9 @@
 from .response_function_call_arguments_delta_event import (
     ResponseFunctionCallArgumentsDeltaEvent as ResponseFunctionCallArgumentsDeltaEvent,
 )
+from .response_computer_tool_call_output_screenshot import (
+    ResponseComputerToolCallOutputScreenshot as ResponseComputerToolCallOutputScreenshot,
+)
 from .response_format_text_json_schema_config_param import (
     ResponseFormatTextJSONSchemaConfigParam as ResponseFormatTextJSONSchemaConfigParam,
 )
@@ -138,3 +150,6 @@
 from .response_code_interpreter_call_interpreting_event import (
     ResponseCodeInterpreterCallInterpretingEvent as ResponseCodeInterpreterCallInterpretingEvent,
 )
+from .response_computer_tool_call_output_screenshot_param import (
+    ResponseComputerToolCallOutputScreenshotParam as ResponseComputerToolCallOutputScreenshotParam,
+)
@@ -0,0 +1,47 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

from typing import List, Optional
from typing_extensions import Literal

from ..._models import BaseModel
from .response_computer_tool_call_output_screenshot import ResponseComputerToolCallOutputScreenshot

__all__ = ["ResponseComputerToolCallOutputItem", "AcknowledgedSafetyCheck"]


class AcknowledgedSafetyCheck(BaseModel):
    id: str
    """The ID of the pending safety check."""

    code: str
    """The type of the pending safety check."""

    message: str
    """Details about the pending safety check."""


class ResponseComputerToolCallOutputItem(BaseModel):
    id: str
    """The unique ID of the computer call tool output."""

    call_id: str
    """The ID of the computer tool call that produced the output."""

    output: ResponseComputerToolCallOutputScreenshot
    """A computer screenshot image used with the computer use tool."""

    type: Literal["computer_call_output"]
    """The type of the computer tool call output. Always `computer_call_output`."""

    acknowledged_safety_checks: Optional[List[AcknowledgedSafetyCheck]] = None
    """
    The safety checks reported by the API that have been acknowledged by the
    developer.
    """

    status: Optional[Literal["in_progress", "completed", "incomplete"]] = None
    """The status of the message input.

    One of `in_progress`, `completed`, or `incomplete`. Populated when input items
    are returned via API.
    """