Updates documentation for pipecat PR #4141:

- Document new WebSocket-based `OpenAIResponsesLLMService` as the default (a usage sketch follows below)
- Add documentation for `OpenAIResponsesHttpLLMService` (HTTP variant)
- Add a "WebSocket vs HTTP" section explaining when to use each
- Update Configuration section with the `ws_url` parameter
- Update Usage examples to show both variants
- Add notes about persistent WebSocket connections and incremental context optimization
- Clarify that both variants have identical constructor args and settings

**Breaking change:** `OpenAIResponsesLLMService` now uses WebSocket transport. Users who need HTTP streaming should use `OpenAIResponsesHttpLLMService`.
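The PR body doesn't include a snippet, but based on the class names it documents, constructing the new default looks roughly like this. Only the class name comes from this PR; the import path, the `api_key`/`model` parameters, and the model choice are assumptions based on pipecat's existing OpenAI services:

```python
import os

# Assumed import path, following pipecat's existing OpenAI service layout.
from pipecat.services.openai.llm import OpenAIResponsesLLMService

# Default variant after this PR: streams responses over a persistent
# WebSocket connection instead of HTTP.
llm = OpenAIResponsesLLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",  # any Responses-API-capable model
)
```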
Automated documentation update for pipecat PR #4141.
**Changes**

Updated `server/services/llm/openai-responses.mdx`:

- `OpenAIResponsesLLMService` as the default implementation
- `OpenAIResponsesHttpLLMService` (HTTP variant)
- `ws_url` parameter for the WebSocket variant (sketched below)
- `previous_response_id` and the connection-local cache
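As a rough illustration of the new configuration knob, a `ws_url` override might look like the following. Only the parameter name comes from this PR; treating it as a constructor argument, the import path, and the endpoint URL are all assumptions:

```python
import os

from pipecat.services.openai.llm import OpenAIResponsesLLMService  # assumed path

# ws_url is the new parameter documented in this PR. Whether it is passed
# directly to the constructor (as sketched here) or via a settings object
# is an assumption; the endpoint URL is purely illustrative.
llm = OpenAIResponsesLLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
    ws_url="wss://gateway.example.com/v1/responses",
)
```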
**Gaps identified**

None - the OpenAI Responses service already had a dedicated documentation page at `server/services/llm/openai-responses.mdx`.

**Breaking change noted**
The documentation now clearly states that `OpenAIResponsesLLMService` uses WebSocket transport by default. Users who need HTTP streaming behavior should switch to `OpenAIResponsesHttpLLMService`.
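For affected users, the migration should amount to a class-name swap, since the PR states both variants take identical constructor args and settings. A minimal sketch, with the import path assumed:

```python
import os

# Before this PR, OpenAIResponsesLLMService streamed over HTTP. To keep
# HTTP streaming after the change, swap in the HTTP variant; per the PR,
# the constructor args are unchanged.
from pipecat.services.openai.llm import OpenAIResponsesHttpLLMService  # assumed path

llm = OpenAIResponsesHttpLLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",
)
```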