Fix: text-only mode returning empty responses #33

Open · AbrahamNobleOX wants to merge 2 commits into main (showing changes from 1 commit)
src/app/App.tsx: 24 changes (17 additions, 7 deletions)

@@ -245,18 +245,28 @@ function App() {
     const instructions = currentAgent?.instructions || "";
     const tools = currentAgent?.tools || [];

-    const sessionUpdateEvent = {
-      type: "session.update",
-      session: {
-        modalities: ["text", "audio"],
-        instructions,
+    const sessionConfig = {
+      modalities: ["text"],
+      instructions,
+      tools,
+    };
paix26875 commented on Mar 2, 2025:
@AbrahamNobleOX

Thank you for your work on this PR! I really appreciate it.
There's one point I'd like to raise.

The isAudioPlaybackEnabled boolean value is used to distinguish between the AI’s audio mode and text-only mode. However, it seems that the user’s audio mode is also being turned off.

I think it would be better if the user’s audio mode remained available, regardless of the isAudioPlaybackEnabled setting.

Let me know what you think! 😊

Suggested change:

-    const sessionConfig = {
-      modalities: ["text"],
-      instructions,
-      tools,
-    };
+    const sessionConfig = {
+      modalities: ["text"],
+      input_audio_format: "pcm16",
+      input_audio_transcription: { model: "whisper-1" },
+      instructions,
+      tools,
+      turn_detection: turnDetection,
+    };

Summarizing the changes, the updated code would look like this:

    const sessionConfig = {
      modalities: ["text"],
      input_audio_format: "pcm16",
      input_audio_transcription: { model: "whisper-1" },
      instructions,
      tools,
      turn_detection: turnDetection,
    };


    if (isAudioPlaybackEnabled) {
      sessionConfig.modalities.push("audio"); // Changing this to "text" will disable audio
      Object.assign(sessionConfig, {
        voice: "coral",
        output_audio_format: "pcm16",
      });
    }

    const sessionUpdateEvent = {
      type: "session.update",
      session: sessionConfig,
    };

AbrahamNobleOX (Author) commented on Mar 2, 2025:

If you mean that the agent doesn't listen, I just tested it again and it works well.

The issue you may be having is that "Push to Talk" is checked; it acts like a mute button and stops the agent from listening to your voice.

Make sure it is unchecked. See the attached image:

[Screenshot: 2025-03-02 at 01:52:36]

paix26875 commented on Mar 2, 2025:

@AbrahamNobleOX
Thanks for your response! I really appreciate the discussion.

What I meant to say is that isAudioPlaybackEnabled should be used only to control the AI's audio mode and should be independent of user audio input and Whisper transcription.

Also, Push to Talk isn't actually a mute button: it controls turn detection. When it's enabled, the user's voice is recognized only while the Talk button is pressed.

Your PR does provide more flexibility in toggling audio mode, which is great! However, turning off isAudioPlaybackEnabled also disables Whisper transcription and the turn detection controlled by the Push-to-Talk setting, which might not be the intended behavior.

Apologies if I misunderstood anything, and I’d love to hear your thoughts! 😊
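
A minimal sketch of the turn-detection relationship described above, in the spirit of this repo's App.tsx (the flag name isPTTActive and the server_vad parameter values are assumptions for illustration, not taken from this diff):

    // Sketch: turn_detection is driven by the Push-to-Talk toggle,
    // not by isAudioPlaybackEnabled.
    function getTurnDetection(isPTTActive: boolean) {
      // Push to Talk on: turn_detection is null, and the client commits
      // the input audio buffer itself while the Talk button is held.
      if (isPTTActive) return null;
      // Push to Talk off: server-side voice activity detection (VAD)
      // decides when the user's turn ends. Parameter values are illustrative.
      return {
        type: "server_vad",
        threshold: 0.5,
        prefix_padding_ms: 300,
        silence_duration_ms: 200,
        create_response: true,
      };
    }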

AbrahamNobleOX (Author) commented:

@paix26875
Great explanation.

"Mute" wasn't the best choice of word but it acts as such, hence why I called it that 😊 Sorry for calling that, but I understand it's purpose.

Currently trying to implement your code suggestion and will push an update when I test and it fits.

Thank you!

AbrahamNobleOX (Author) commented:

@paix26875

I've tested your code suggestion and it works great.
I've also pushed the update accordingly.

Thank you for pointing that out! 😊

paix26875 commented:

Thank you so much for taking the time to understand my point and incorporate the changes! I appreciate it. Great work! 🚀



+    if (isAudioPlaybackEnabled) {
+      sessionConfig.modalities.push("audio"); // Changing this to "text" will disable audio
+      Object.assign(sessionConfig, {
         voice: "coral",
         input_audio_format: "pcm16",
         output_audio_format: "pcm16",
         input_audio_transcription: { model: "whisper-1" },
         turn_detection: turnDetection,
-        tools,
-      },
+      });
+    }
+
+    const sessionUpdateEvent = {
+      type: "session.update",
+      session: sessionConfig,
     };

     sendClientEvent(sessionUpdateEvent);
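
To make the effect of this diff concrete, here is a sketch of the two session.update payloads the committed code would send. Field values are taken from the diff; instructions, tools, and turnDetection are declared as placeholders for the values computed earlier in App.tsx:

    // Placeholders for values that come from the surrounding App.tsx scope,
    // declared here only so the sketch stands alone.
    declare const instructions: string;
    declare const tools: object[];
    declare const turnDetection: object | null;

    // isAudioPlaybackEnabled === false: a text-only session.
    const textOnlyUpdate = {
      type: "session.update",
      session: { modalities: ["text"], instructions, tools },
    };

    // isAudioPlaybackEnabled === true: audio fields are merged in.
    const audioUpdate = {
      type: "session.update",
      session: {
        modalities: ["text", "audio"],
        voice: "coral",
        input_audio_format: "pcm16",
        output_audio_format: "pcm16",
        input_audio_transcription: { model: "whisper-1" },
        turn_detection: turnDetection,
        instructions,
        tools,
      },
    };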
src/app/hooks/useHandleServerEvent.ts: 10 changes (10 additions, 0 deletions)

@@ -139,6 +139,16 @@ export function useHandleServerEvent({
         break;
       }

+      case "conversation.item.text.delta":
+      case "response.text.delta": {
+        const itemId = serverEvent.item_id;
+        const deltaText = serverEvent.delta || "";
+        if (itemId) {
+          updateTranscriptMessage(itemId, deltaText, true);
+        }
+        break;
+      }
+
       case "conversation.item.input_audio_transcription.completed": {
         const itemId = serverEvent.item_id;
         const finalTranscript =
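
For context on what the new cases do: each text delta event carries one chunk of the streamed response, keyed by item_id, and updateTranscriptMessage(itemId, text, append) appends it to that item's transcript entry. A standalone sketch of that append behavior (the helper and the item id here are illustrative, not the app's real implementation):

    type Transcript = Record<string, string>;

    // Append one streamed chunk to the entry for its item_id.
    function applyTextDelta(
      transcript: Transcript,
      itemId: string,
      delta: string
    ): Transcript {
      return { ...transcript, [itemId]: (transcript[itemId] ?? "") + delta };
    }

    // Two deltas for the same (hypothetical) item accumulate in order:
    let transcript: Transcript = {};
    transcript = applyTextDelta(transcript, "item_123", "Hel");
    transcript = applyTextDelta(transcript, "item_123", "lo!");
    // transcript["item_123"] === "Hello!"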