examples/whisper/README.md
[Whisper](https://github.com/openai/whisper) is a speech-to-text model from OpenAI. It ordinarily requires 30s of input audio for transcription, making it challenging to use in real-time applications. We work around this limitation by padding shorter bursts of speech with silent audio packets.
## How to run the demo
### Step 1:
Change the URL and TOKEN inside the `whisper.py` script to use your LiveKit websocket URL and a valid session token.
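For reference, the connection settings are plain constants near the top of the script; a sketch of what to edit might look like the following (the exact variable names here are assumptions — check your copy of `whisper.py`):

```python
# Placeholder values -- replace with your own LiveKit server URL and token.
# (Variable names are illustrative; match whatever whisper.py actually uses.)
URL = "wss://your-livekit-host"  # your LiveKit websocket URL
TOKEN = "<access token>"         # a valid session token granting access to the room
```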
### Step 2:
Clone [whisper.cpp](https://github.com/ggerganov/whisper.cpp) inside this directory.
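For example, from this directory (the repository URL is the one linked above):

```shell
# Clone whisper.cpp into the current directory (examples/whisper)
git clone https://github.com/ggerganov/whisper.cpp
```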
Connect another participant to the room and publish a microphone stream. To do this quickly, you can use our [Meet example](https://meet.livekit.io/?tab=custom) or the [livekit-cli](https://github.com/livekit/livekit-cli):
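A sketch of the livekit-cli invocation is below; the flag names are assumptions based on the CLI's `join-room` command, so check `livekit-cli join-room --help` for the current options, and substitute your own server URL, API key, secret, and room name:

```shell
# Join the room as a second participant and publish a demo media stream.
# All values below are placeholders -- replace them with your own.
livekit-cli join-room \
  --url wss://your-livekit-host \
  --api-key <key> \
  --api-secret <secret> \
  --room <room-name> \
  --identity publisher \
  --publish-demo
```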