
Support Streaming Audio Data via recognizeUsingWebSocket #1000

@jeff-arn

Description


Currently, the interface exposed in SpeechToTextV1/SpeechToText+Recognize.swift only keeps a SpeechToTextSession alive for as long as it takes to transcribe a single Data blob.

We should add support for sending smaller chunks of data in real time as part of one session, so that streaming audio applications that are not driven by the microphone can be supported. A sketch of the desired flow follows.
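A minimal sketch of the intended usage, assuming a long-lived session API along the lines of the SDK's existing SpeechToTextSession (connect, startRequest(settings:), recognize(audio:), stopRequest, disconnect); the exact names, the .opus content type, and the loadChunksSomehow() helper are illustrative assumptions, not a proposal for final signatures:

```swift
import Foundation
import SpeechToTextV1  // Watson Swift SDK module (assumed available)

// Open one long-lived WebSocket session instead of a per-blob request.
let session = SpeechToTextSession(username: "your-username", password: "your-password")

// Print interim and final transcripts as they arrive.
session.onResults = { results in
    print(results.bestTranscript)
}
session.onError = { error in
    print("session error: \(error)")
}

// Describe the audio format once for the whole stream.
var settings = RecognitionSettings(contentType: .opus)
settings.interimResults = true

session.connect()
session.startRequest(settings: settings)

// Feed chunks as they are produced (e.g. decoded from a network stream),
// reusing the same session rather than opening one per Data blob.
let audioChunks: [Data] = loadChunksSomehow()  // hypothetical source of audio
for chunk in audioChunks {
    session.recognize(audio: chunk)
}

session.stopRequest()
session.disconnect()
```

This mirrors the microphone-driven session flow, but with the caller pushing arbitrary Data chunks into the open session instead of the SDK capturing audio itself.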
