secretsauceai / precise-wakeword-model-maker

Automated, end-to-end wakeword model maker using the Precise Wakeword Engine
Apache License 2.0


Precise Wakeword Model Maker


Do you want your own personal wake word?

The Precise Wakeword Model Maker is part of the Secret Sauce AI Wakeword Project. It pulls out all of the tricks in AI to turn a very sparse data set into a production-quality wakeword model for Mycroft Precise.

How does it work?

A user follows a data collection recipe

It all starts with the user collecting data for the wakeword and not-wakeword categories, for example with the Wakeword Data Collector.

Precise Wakeword Model Maker recipes

TTS voice data generation

When you don't have enough data to train a model, generate it. TTS engines are scraped, much like in the data collection recipe, using TTS plugins from OpenVoiceOS. The more voices, the better!
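To make the idea concrete, here is a minimal sketch; synthesize() is a hypothetical placeholder for a call into whatever OpenVoiceOS TTS plugin is configured in config/TTS_engine_config.json, and the wakeword and voice names are made up:

import os

WAKEWORD = "hey computer"                   # hypothetical example wakeword
VOICES = ["voice_a", "voice_b", "voice_c"]  # hypothetical TTS voice ids

def synthesize(text, voice, out_path):
    # Placeholder: a real run would call an OpenVoiceOS TTS plugin here.
    raise NotImplementedError

os.makedirs("out/TTS_generated_converted/wake-word/TTS", exist_ok=True)
for voice in VOICES:
    path = f"out/TTS_generated_converted/wake-word/TTS/{voice}.wav"
    synthesize(WAKEWORD, voice, path)       # one sample per voice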

Best model selection

How do you know whether your test-training distribution yields the best model? With big data sets, randomly splitting once (e.g. 80/20) is usually good enough. With sparse data sets, however, the initial test-training split matters much more. By splitting the data set many times and training an experimental model on each split, the best initial data distribution can be found. This step can boost the model's performance on the training set by as much as ~10%.
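A minimal sketch of the idea, assuming a hypothetical train_and_eval(train, test) callback that trains an experimental Precise model and returns its final loss:

import random

def pick_best_split(samples, train_and_eval, n_experiments=10, test_fraction=0.2):
    # Try several random test/training splits, train an experimental model
    # on each, and keep the split whose model has the lowest loss.
    best_loss, best = float("inf"), None
    for seed in range(n_experiments):
        shuffled = random.Random(seed).sample(samples, len(samples))
        cut = int(len(shuffled) * test_fraction)
        test, train = shuffled[:cut], shuffled[cut:]
        loss = train_and_eval(train, test)
        if loss < best_loss:
            best_loss, best = loss, (train, test)
    return best_loss, best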

Incremental and curriculum learning

Only add false positives (*) to the training/test set. Why add a bunch of files that the model can already classify correctly, when you can give the model lessons where it needs to improve?

Speaking of lessons: you don't learn by reading the pages of a textbook in a totally random order, do you? Why should a machine learning model be subjected to that added difficulty? Let the machine learn with an ordered curriculum of data. This usually boosts the model's performance over the shotgun approach by 5-10%. Not bad!

(*) NOTE: This actually worsens the raw score of the model, because it trains and tests only on hard-to-learn examples instead of giving the model an easy A. But honestly, if you're getting 98% on your test and/or training set and the model doesn't actually work correctly in the real world, you really need to reconsider your machine learning strategy. ;)
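A minimal sketch of the incremental step, with model.predict() as a hypothetical stand-in for a Precise inference call:

def harvest_false_positives(model, random_clips, threshold=0.5):
    # Run the current model over random not-wakeword recordings and keep
    # only the clips it wrongly activates on; only these "lessons" get
    # added to the training/test set.
    return [clip for clip in random_clips if model.predict(clip) >= threshold]

The curriculum part is then just an ordering decision: train in stages (for example clean user recordings first, then TTS data, then noisy variants), harvesting false positives between stages.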

Noise generation recipes

Gaussian noise (static) is mixed into the pre-existing audio recordings; this makes the model more robust and helps it generalize.

A user can also use other noisy data sets (e.g. pdsounds) to mix background noise into existing audio files, further ensuring a robust model that can wake up even in noisy environments.
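A minimal sketch of both mixing steps, assuming the numpy and soundfile packages for wav I/O (the actual implementation may differ):

import numpy as np
import soundfile as sf

def add_gaussian_noise(in_path, out_path, noise_level=0.005):
    # Mix static into an existing recording to create an extra training file.
    audio, rate = sf.read(in_path)
    noisy = audio + np.random.normal(0.0, noise_level, audio.shape)
    sf.write(out_path, np.clip(noisy, -1.0, 1.0), rate)

def mix_background(in_path, noise_path, out_path, gain=0.3):
    # Overlay a background recording (e.g. from pdsounds) onto a clip.
    audio, rate = sf.read(in_path)
    noise, _ = sf.read(noise_path)
    mixed = audio + gain * np.resize(noise, audio.shape)  # loop/trim to fit
    sf.write(out_path, np.clip(mixed, -1.0, 1.0), rate)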

Installation

Manually installing with Python

Precise requires Python 3.7 (for TensorFlow 1.13 support).
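A minimal install sketch, assuming a standard clone-and-pip workflow (the exact requirements file may differ):

git clone https://github.com/secretsauceai/precise-wakeword-model-maker.git
cd precise-wakeword-model-maker
python3.7 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt    # assumption: the repo ships a requirements file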

Docker

docker run -it \
  -v "local_directory_for_model_output:/app/out" \
  -v "local_collected_audio_directory:/data" \
  -v "local_directory_path_for_config/:/app/config" \
  bartmoss/precise-wakeword-model-maker

Configuration
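The settings referenced throughout the steps below live in config/data_prep_user_configuration.json. An illustrative sketch (the keys are taken from the sections below, the values are placeholders, and the real file may contain more settings):

{
  "wakeword_model_name": "my_wakeword",
  "audio_source_directory": "/data",
  "extra_audio_directories_to_process": ["/data/common_voice/clips"],
  "extra_audio_directories_labels": ["utterances"]
}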

Usage

Note: don't forget to activate your venv: source .venv/bin/activate

Run python data_prep to start the Precise Wakeword Model Maker, or run it from the command line with arguments:

Precise Wakeword Model Maker menu

tl;dr If you're sure you've installed and configured everything correctly and collected all of the data you need, go ahead and run through the steps in order, or choose 5. Do it all.

Just be aware: it will take A LONG time to run everything.

1. Generate TTS wakeword data

The wakeword and wakeword syllables in config/TTS_wakeword_config.json are used to scrape the TTS voices in config/TTS_engine_config.json. The results will be in out/TTS_generated_converted/.

There are three types of resulting files: the full wakeword, its individual syllables, and sequential permutations of the syllables.

The syllables and sequential permutations are vital to ensure that the model doesn't get lazy and focus on parts of the wakeword instead of learning the whole wakeword.
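One possible reading of "sequential permutations", sketched with a made-up syllable split (the real generation may differ):

# Hypothetical syllable split for the wakeword "hey computer"
syllables = ["hey", "com", "pu", "ter"]

# Every contiguous, in-order run of syllables, so the model can never get
# away with recognizing only a fragment of the wakeword.
runs = [
    " ".join(syllables[i:j])
    for i in range(len(syllables))
    for j in range(i + 1, len(syllables) + 1)
]
# ['hey', 'hey com', 'hey com pu', 'hey com pu ter', 'com', 'com pu', ...]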

IMPORTANT: check each wakeword file in out/TTS_generated_converted/wake-word/TTS/ and discard any samples where the wakeword is mispronounced before moving on to any other steps.

2. Optimally split and create a base model from wake-word-recorder data

For effective machine learning, we need a good training and test set. This step takes the audio collected from audio_source_directory in config/data_prep_user_configuration.json, plus the TTS-generated audio (see above), and creates 10 different distributions between the test and training set. It then trains an experimental model for each distribution and keeps the one with the lowest loss (the model with the highest training set accuracy), renaming the model and its directory of data to your wakeword_model_name from config/data_prep_user_configuration.json: out/wakeword_model_name/.

The experimental directories and models are temporarily stored in out/ as experiment_n, where n is the number of the experiment.

The data is split in different ways, depending on the kind of data. This can be configured in config/data_prep_system_configuration.json. Unless you collected your data with something other than the Wakeword Data Collector, these settings should work fine.

The TTS-generated data is split 80/20.

Finally, the model will be incrementally trained to find false positives in the random recordings (e.g. TV and natural conversations) in audio_source_directory/random/user_collected/, where audio_source_directory is configured in config/data_prep_user_configuration.json, and then benchmarked.

3. Generate extra data

Gaussian and background noise (e.g. pdsounds) is mixed into the audio files to produce further audio files.

The lists of directories for both are in config/data_prep_system_configuration.json.

Finally, the model is trained on this data and benchmarked.

4. Generate further data from other data sets

Although a lot of training and testing has gone on by now, the model has not yet reached production quality. It is very important to incrementally test and train it on as much not-wakeword data as possible to find potential false wake-ups.

You should download at least one very large data set (at least 50,000 random utterances from many people speaking into different mics), such as Common Voice. The data set can be in mp3 or wav format; all non-user-collected data sets are automatically converted (from mp3, or even wav at another sample rate) to wav with a 16000 Hz sample rate. Please read Data below for more information about these data sets and where to download them.

These data sets can be added to config/data_prep_user_configuration.json, where extra_audio_directories_to_process is the list of directories containing the data set sources (it is important to point these directly at the directories where the mp3 or wav files can be found) and extra_audio_directories_labels are the labels (sub directories) they will be stored under (e.g. non-utterances, utterances, etc.) in out/wakeword_model_name/random/. Each directory must have a label.

5. Do it all

You can do it all!

6. Exit

Always know your escape route.

Data

It is important to note that downloading a lot of data is vital to producing a bulletproof wake word model. Also note that data prep does not walk through sub directories of sound files; it only processes the top-level directory, so it is best to just dump audio files there. The files can be in mp3 or wav format, and data prep will convert them to wav with a sample rate of 16000.
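A sketch of that conversion using pydub (an assumption; the actual tool data prep uses may differ, and pydub needs ffmpeg for mp3 input):

from pathlib import Path
from pydub import AudioSegment

def convert_top_level(src_dir, dst_dir):
    # Only the top-level directory is processed; sub directories are ignored.
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for f in Path(src_dir).iterdir():
        if f.suffix.lower() in (".mp3", ".wav"):
            audio = AudioSegment.from_file(f)
            audio = audio.set_frame_rate(16000).set_channels(1)  # mono is an assumption
            audio.export(Path(dst_dir) / (f.stem + ".wav"), format="wav")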

Running your wakeword model

The resulting model will be a TensorFlow 1.13 Precise wakeword model. It can easily be run with precise-listen wakeword_model_name.net, configured to run in Mycroft, or even converted to TensorFlow Lite and run by the TensorFlow Lite runner.


Special thanks and dedication

Although Secret Sauce AI is always about collaboration and community, special thanks go to Joshua Ferguson for doing so much testing and code refactoring. We also extend a very warm thanks to the folks over at Mycroft, without whom there would be no FOSS TensorFlow wakeword engine.

In loving memory of Heinz Sieber

-Bartmoss