
ERROR [Error: UndefinedErrorMultilingualConfiguration] #462

@YashYash

Description

Thanks for creating this package! Can't wait to get this working and into production. I'm running into the issue below.

I'm testing an Expo dev client app on an iOS simulator. On a physical device I got a different error; I created a separate issue for that.

Package versions

"react-native-executorch": "^0.4.7"
"react-native": "0.79.5"
"expo": "~53.0.13"
"expo-dev-client": "~5.2.2"
"@speechmatics/expo-two-way-audio": "^0.1.2" // using this library since it supports echo cancellation

I have been spending some time trying to debug this error. My current flow is:

  1. Turn on the microphone to start recording
  2. Call streamingTranscribe(STREAMING_ACTION.START)
  3. On each mic recording chunk, call onChunk, which calls:
     streamingTranscribe(
       STREAMING_ACTION.DATA,
       Array.from(data),
       SpeechToTextLanguage.English,
     )

The streamTranscribeError I get is:

[Error: UndefinedErrorModuleNotLoaded]

This prints out repeatedly while trying to transcribe each chunk.

The other error I am getting whenever the app hot reloads is:

[Error: UndefinedErrorUndefinedErrorThe package 'react-native-executorch' doesn't seem to be linked. Make sure:

- You have run 'pod install'
- You rebuilt the app after installing the package
- You are not using Expo Go
]
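In case it helps, these are the steps I use to rebuild the dev client and rule out a stale native build (assuming the standard Expo prebuild workflow, where the package should be autolinked during pod install):

```shell
# Regenerate the native ios/ directory from scratch
npx expo prebuild --clean
# Runs pod install and rebuilds/reinstalls the dev client
npx expo run:ios
```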

Here is the hook I am working on:

import { useCallback } from 'react'
import { Platform, PermissionsAndroid } from 'react-native'
import {
  SpeechToTextLanguage,
  STREAMING_ACTION,
  useSpeechToText,
} from 'react-native-executorch'
import {
  initialize as initializeTwoWayAudio,
  setMicrophoneModeIOS,
  toggleRecording,
  useExpoTwoWayAudioEventListener,
  useIsRecording,
} from '@speechmatics/expo-two-way-audio'

export type SpeechToTextOptions = {
  onSegment?: (text: string) => void
  onError?: (err: Error) => void
  onReady?: () => void
  language?: string
  streamingConfig?: 'fast' | 'balanced' | 'quality'
}

type MicrophoneDataEvent = { data: Uint8Array }

export function useSpeechTranscribe({
  onSegment,
  onError,
  language = 'en',
  streamingConfig = 'balanced',
}: SpeechToTextOptions = {}) {
  const {
    streamingTranscribe,
    isGenerating,
    configureStreaming,
    error: streamTranscribeError,
    sequence,
    isReady,
    isGenerating: isLoading,
    downloadProgress,
  } = useSpeechToText({
    modelName: 'whisper',
    windowSize: 3,
    overlapSeconds: 1.2,
    streamingConfig,
  })
  const isListening = useIsRecording()
  if (streamTranscribeError) {
    console.log('#### STREAMING TRANSCRIBE ERROR', streamTranscribeError)
  }

  // Request microphone permission
  const requestMicrophonePermission = useCallback(async () => {
    if (Platform.OS === 'android') {
      const granted = await PermissionsAndroid.request(
        PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
        {
          title: 'Microphone Permission',
          message: '*** needs access to your microphone to enable *** voice mode',
          buttonNeutral: 'Ask Me Later',
          buttonNegative: 'Cancel',
          buttonPositive: 'OK',
        }
      )
      if (granted !== PermissionsAndroid.RESULTS.GRANTED) {
        throw new Error('Microphone permission not granted')
      }
    }
  }, [])

  const onChunk = (data: number[]) => {
    streamingTranscribe(
      STREAMING_ACTION.DATA,
      data,
      SpeechToTextLanguage.English,
    )
  }

  useExpoTwoWayAudioEventListener('onMicrophoneData', async (
    event: MicrophoneDataEvent,
  ) => {
    if (event && event.data) {
      const uint8 = event.data
      const int16 = new Int16Array(uint8.buffer, uint8.byteOffset, uint8.byteLength / 2)
      const floatArray = Array.from(int16, v => v / 32768)
      onChunk(floatArray)
    }
  })

  const startListening = useCallback(async () => {
    if (!isReady) return
    try {
      await requestMicrophonePermission()
      await initializeTwoWayAudio()
      streamingTranscribe(STREAMING_ACTION.START)
      toggleRecording(true)
    } catch (err: any) {
      onError?.(err instanceof Error ? err : new Error(String(err)))
    }
  }, [isReady, requestMicrophonePermission, streamingTranscribe, onError])

  const sitMicMode = useCallback(() => {
    setMicrophoneModeIOS()
  }, [])

  const stopListening = useCallback(() => {
    toggleRecording(false)
    streamingTranscribe(
      STREAMING_ACTION.STOP,
      undefined,
      SpeechToTextLanguage.English,
    )
  }, [streamingTranscribe])

  // Nothing is ever logged here.
  console.log(sequence)

  return {
    startListening,
    stopListening,
    sitMicMode,
    configureStreaming,
    isReady,
    isGenerating,
    isLoading,
    downloadProgress,
    sequence,
    streamTranscribeError,
    isListening,
  }
}

Thanks!

Steps to reproduce

  1. Turn on the microphone to start recording
  2. Call streamingTranscribe(STREAMING_ACTION.START)
  3. On each mic recording chunk, call onChunk, which calls:
     streamingTranscribe(
       STREAMING_ACTION.DATA,
       Array.from(data),
       SpeechToTextLanguage.English,
     )

Snack or a link to a repository

No response

React Native Executorch version

^0.4.7

React Native version

0.79.5

Platforms

iOS

JavaScript runtime

Hermes

Workflow

Expo Dev Client

Architecture

Fabric (New Architecture)

Build type

Debug mode

Device

Real device

Device model

iPhone 15 Pro Max

AI model

whisper

Performance logs

No response

Acknowledgements

Yes
