Speech-To-Text Module #2
I just set up and tried out DeepSpeech; it's pretty darn cool and pretty much works out of the box! Awesome find, Michael.
Some preliminary testing shows that the STT module was running with just shy of 12.8% of my computer's memory (16 GB), so we're looking at just over 2 GB of memory.
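For a quick cross-check of numbers like these without pulling in `memory_profiler`, the standard library's `resource` module can report the process's peak resident set size. A sketch for Linux/macOS (note `ru_maxrss` units differ between the two platforms):

```python
# Sketch: report peak resident set size of the current process using
# only the standard library (Unix-only; no memory_profiler needed).
import resource
import sys

def peak_rss_mb():
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in kilobytes on Linux but bytes on macOS
    if sys.platform == "darwin":
        peak //= 1024
    return peak / 1024.0  # MiB

print("peak RSS so far: {:.1f} MiB".format(peak_rss_mb()))
```

This only captures the peak for the current process, so it is a sanity check rather than a replacement for the time-series numbers `memory_profiler` gives.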
Do you know which libraries are pulled in? Which model are you using? I remember there being a TFLite model as well, which is built for mobile apps and embedded systems.
We have up to 8 GB of memory, so it won't cause any serious issues, but it does increase the cost per device.
I'm using this model: https://github.com/mozilla/DeepSpeech/releases/download/v0.7.4/deepspeech-0.7.4-models.pbmm. I just realized it's not the latest one (0.8.0), so I'll download that when my internet starts working again and give it a shot. I'm not sure what all the libraries being pulled in are.
See if you can work with the TFLite model. That one is built to be a bit more lightweight.
What about this one?
I'll give that model a shot later tonight, @mfekadu! Or @hhokari can try that one out. Here are some metrics from a sample usage of the TFLite model. NOTES:
Super cool, @Jason-Ku! Perhaps we can make good use of the extra memory by fine-tuning the pre-trained model to ensure that domain-specific words will work (e.g. …)
Not sure how to get this working. I unzipped it and there are no model files here, just a bunch of hex data; might need to compile it in C.
It's set up for ARM. I'll test it on a Raspberry Pi.
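One way to confirm that the "bunch of hex data" is actually a TensorFlow Lite flatbuffer rather than a corrupted download: TFLite files carry the file identifier `TFL3` at byte offset 4. A small check (the filename below is illustrative):

```python
# Sketch: check whether a blob of bytes looks like a TensorFlow Lite
# flatbuffer by testing for the "TFL3" file identifier at offset 4.
def looks_like_tflite(data: bytes) -> bool:
    return len(data) >= 8 and data[4:8] == b"TFL3"

# usage sketch (path is hypothetical):
# with open("deepspeech-0.7.4-models.tflite", "rb") as f:
#     print(looks_like_tflite(f.read(8)))
```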
Running it on the Pi, it looks like it needs SoX installed:
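A small preflight sketch for the Pi: check for the `sox` binary before launching the client, so the failure mode is an install hint rather than a mid-run exception. (The apt package names are the usual ones for Raspberry Pi OS; verify for your image.)

```python
# Sketch: fail early with an install hint if the SoX binary is missing.
import shutil

def sox_available() -> bool:
    # shutil.which searches PATH the same way the shell would
    return shutil.which("sox") is not None

if not sox_available():
    print("SoX not found; try: sudo apt-get install sox libsox-fmt-all")
```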
@Jason-Ku's memory-profiling Python script:

```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, print_function

import argparse
import json
import shlex
import subprocess
import sys
import wave

import numpy as np
from deepspeech import Model, version
from memory_profiler import memory_usage, profile  # profile provides the @profile decorator
from timeit import default_timer as timer

try:
    from shlex import quote
except ImportError:
    from pipes import quote  # Python 2 fallback


def convert_samplerate(audio_path, desired_sample_rate):
    sox_cmd = 'sox {} --type raw --bits 16 --channels 1 --rate {} --encoding signed-integer --endian little --compression 0.0 --no-dither - '.format(
        quote(audio_path), desired_sample_rate)
    try:
        output = subprocess.check_output(shlex.split(sox_cmd), stderr=subprocess.PIPE)
    except subprocess.CalledProcessError as e:
        raise RuntimeError('SoX returned non-zero status: {}'.format(e.stderr))
    except OSError as e:
        raise OSError(e.errno, 'SoX not found, use {}hz files or install it: {}'.format(desired_sample_rate, e.strerror))
    return desired_sample_rate, np.frombuffer(output, np.int16)


def metadata_to_string(metadata):
    return ''.join(token.text for token in metadata.tokens)


def words_from_candidate_transcript(metadata):
    word = ""
    word_list = []
    word_start_time = 0
    # Loop through each character
    for i, token in enumerate(metadata.tokens):
        # Append character to word if it's not a space
        if token.text != " ":
            if len(word) == 0:
                # Log the start time of the new word
                word_start_time = token.start_time
            word = word + token.text
        # Word boundary is either a space or the last character in the array
        if token.text == " " or i == len(metadata.tokens) - 1:
            word_duration = token.start_time - word_start_time
            if word_duration < 0:
                word_duration = 0
            each_word = dict()
            each_word["word"] = word
            each_word["start_time"] = round(word_start_time, 4)
            each_word["duration"] = round(word_duration, 4)
            word_list.append(each_word)
            # Reset
            word = ""
            word_start_time = 0
    return word_list


def metadata_json_output(metadata):
    json_result = dict()
    json_result["transcripts"] = [{
        "confidence": transcript.confidence,
        "words": words_from_candidate_transcript(transcript),
    } for transcript in metadata.transcripts]
    return json.dumps(json_result, indent=2)


class VersionAction(argparse.Action):
    def __init__(self, *args, **kwargs):
        super(VersionAction, self).__init__(nargs=0, *args, **kwargs)

    def __call__(self, *args, **kwargs):
        print('DeepSpeech ', version())
        exit(0)


@profile
def stt():
    parser = argparse.ArgumentParser(description='Running DeepSpeech inference.')
    parser.add_argument('--model', required=True,
                        help='Path to the model (protocol buffer binary file)')
    parser.add_argument('--scorer', required=False,
                        help='Path to the external scorer file')
    parser.add_argument('--audio', required=True,
                        help='Path to the audio file to run (WAV format)')
    parser.add_argument('--beam_width', type=int,
                        help='Beam width for the CTC decoder')
    parser.add_argument('--lm_alpha', type=float,
                        help='Language model weight (lm_alpha). If not specified, use default from the scorer package.')
    parser.add_argument('--lm_beta', type=float,
                        help='Word insertion bonus (lm_beta). If not specified, use default from the scorer package.')
    parser.add_argument('--version', action=VersionAction,
                        help='Print version and exits')
    parser.add_argument('--extended', required=False, action='store_true',
                        help='Output string from extended metadata')
    parser.add_argument('--json', required=False, action='store_true',
                        help='Output json from metadata with timestamp of each word')
    parser.add_argument('--candidate_transcripts', type=int, default=3,
                        help='Number of candidate transcripts to include in JSON output')
    args = parser.parse_args()

    print('Loading model from file {}'.format(args.model), file=sys.stderr)
    model_load_start = timer()
    # sphinx-doc: python_ref_model_start
    ds = Model(args.model)
    # sphinx-doc: python_ref_model_stop
    model_load_end = timer() - model_load_start
    print('Loaded model in {:.3}s.'.format(model_load_end), file=sys.stderr)

    if args.beam_width:
        ds.setBeamWidth(args.beam_width)

    desired_sample_rate = ds.sampleRate()

    if args.scorer:
        print('Loading scorer from files {}'.format(args.scorer), file=sys.stderr)
        scorer_load_start = timer()
        ds.enableExternalScorer(args.scorer)
        scorer_load_end = timer() - scorer_load_start
        print('Loaded scorer in {:.3}s.'.format(scorer_load_end), file=sys.stderr)

        if args.lm_alpha and args.lm_beta:
            ds.setScorerAlphaBeta(args.lm_alpha, args.lm_beta)

    fin = wave.open(args.audio, 'rb')
    fs_orig = fin.getframerate()
    if fs_orig != desired_sample_rate:
        print('Warning: original sample rate ({}) is different than {}hz. Resampling might produce erratic speech recognition.'.format(fs_orig, desired_sample_rate), file=sys.stderr)
        fs_new, audio = convert_samplerate(args.audio, desired_sample_rate)
    else:
        audio = np.frombuffer(fin.readframes(fin.getnframes()), np.int16)

    audio_length = fin.getnframes() * (1 / fs_orig)
    fin.close()

    print('Running inference.', file=sys.stderr)
    inference_start = timer()
    # sphinx-doc: python_ref_inference_start
    if args.extended:
        print(metadata_to_string(ds.sttWithMetadata(audio, 1).transcripts[0]))
    elif args.json:
        print(metadata_json_output(ds.sttWithMetadata(audio, args.candidate_transcripts)))
    else:
        print(ds.stt(audio))
    # sphinx-doc: python_ref_inference_stop
    inference_end = timer() - inference_start
    print('Inference took %0.3fs for %0.3fs audio file.' % (inference_end, audio_length), file=sys.stderr)


if __name__ == '__main__':
    mem_usage = memory_usage(stt)
    print('Memory usage (in chunks of .1 seconds): %s' % mem_usage)
    print('Maximum memory usage: %s' % max(mem_usage))
```
I was just able to get DeepSpeech running; really cool!
Ran it on a Raspberry Pi 4B with 1 GB of RAM:
For some reason that audio file gives a `WAVE: RIFF header not found` error:
I realized that my audio file was corrupted during the download. Re-downloading fixed it. New issue:
The screenshot above is also using the …. Here is a link to the docs about the pre-trained models.
Some more interesting info on preamble10.wav, potentially related to why it took so long to process:
Audio data should be read from the audio stream buffer and stored in RAM. That is what we do for the wake word on NIMBUS and the GCP STT API.
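Since `wave.open` accepts any file-like object, the read-from-RAM approach can be sketched with an in-memory buffer. Here `io.BytesIO` is a stand-in for however the wake-word code actually exposes its stream buffer:

```python
# Sketch: read sample rate and raw 16-bit PCM frames from an in-memory
# WAV buffer instead of a file on disk.
import io
import wave

def frames_from_buffer(buf):
    """Return (sample_rate, raw PCM frame bytes) from an in-memory WAV."""
    with wave.open(buf, "rb") as fin:
        return fin.getframerate(), fin.readframes(fin.getnframes())
```

The frame bytes can then go straight into `np.frombuffer(frames, np.int16)` the same way the client script does for on-disk files.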
Based on their documentation, they seem to use 16 kHz, although the Baidu paper suggests that both 16 kHz and 8 kHz datasets were used. They seem to use SoX to resample their data; that process might add some time.
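If we ever need to feed 8 kHz audio to the 16 kHz model without shelling out to SoX, a rough linear-interpolation resampler is easy to sketch in NumPy. This is not equivalent to SoX's proper filtering, so recognition quality may suffer; it only illustrates the rate conversion:

```python
# Sketch: naive linear-interpolation resampling of int16 PCM samples.
# Not a substitute for SoX's filtered resampling; illustration only.
import numpy as np

def resample_linear(audio, fs_orig, fs_new):
    n_new = int(round(len(audio) * fs_new / fs_orig))
    x_old = np.arange(len(audio))
    x_new = np.linspace(0, len(audio) - 1, n_new)
    # Interpolate in float, then cast back to the int16 range DeepSpeech expects
    return np.interp(x_new, x_old, audio.astype(np.float64)).astype(np.int16)
```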
Objective
Explore offline Speech-To-Text (STT) libraries that will convert raw audio bytes to a string.
Key Result
Create a function that will output a string from raw audio bytes input.
Details
The function will take raw audio bytes as input. The properties of the audio are TBD. The raw audio bytes are then converted to a string by an offline/local STT library. Beyond memory, the priority should be a library that allows custom speech adaptation. Speech adaptation will allow some sort of user input (list of words, transcripts, etc.) to disambiguate uncommon words.
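One possible shape for the key-result function, with the STT engine injected so the wrapper stays library-agnostic. With DeepSpeech, `model` would be a `deepspeech.Model`, but anything exposing a compatible `stt()` works; the 16-bit little-endian PCM assumption matches what the client script in this thread feeds DeepSpeech, though the actual audio properties are still TBD:

```python
# Sketch of the key result: raw audio bytes in, transcript string out.
# The model object is an assumption: any engine exposing .stt(samples).
import numpy as np

def transcribe(model, raw_audio):
    """Convert raw 16-bit PCM bytes to text with an offline STT model."""
    samples = np.frombuffer(raw_audio, dtype=np.int16)
    return model.stt(samples)
```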
When selecting the appropriate library, priorities are as follows: