
Web app for keyword spotting using TensorFlow.js
https://castorini.github.io/honkling/
MIT License

Honkling: A JavaScript-based Keyword Spotting System

Honkling is a novel web application with an in-browser keyword spotting system implemented with TensorFlow.js.

Honkling can efficiently identify simple commands (e.g., "stop" and "go") in-browser without a network connection. It demonstrates cross-platform speech recognition capabilities for interactive intelligent agents with its pure JavaScript implementation. For more details, please consult our writeup:

Honkling implements a residual convolutional neural network [1] and uses the Speech Commands dataset for training.
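
To make the in-browser flow concrete, here is a minimal sketch of keyword spotting with the standard TensorFlow.js API. The model URL, the computeMFCC() helper, and the label list are illustrative placeholders, not Honkling's actual code, which loads weights through its own SpeechResModel class.

```javascript
// Minimal sketch of in-browser keyword spotting with TensorFlow.js.
// The model URL, computeMFCC() helper, and label list are placeholders.
import * as tf from '@tensorflow/tfjs';

const LABELS = ['silence', 'unknown', 'stop', 'go'];  // example keyword set

async function spotKeyword(audioSamples) {
  // Load a pre-trained keyword spotting model (placeholder URL).
  const model = await tf.loadLayersModel('weights/model.json');

  // Convert raw audio into MFCC features; computeMFCC stands in for
  // whatever feature extraction the app performs.
  const mfcc = computeMFCC(audioSamples);               // e.g. shape [40, 101]
  const input = tf.tensor(mfcc).expandDims(0).expandDims(-1);

  // Run inference entirely in the browser -- no network round trip.
  const scores = model.predict(input);
  const best = (await scores.argMax(-1).data())[0];
  return LABELS[best];
}
```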

Honkling-node & Honkling-assistant

A Node.js implementation of Honkling is also available in the honkling-node folder.

Honkling-assistant is a customizable, voice-enabled virtual assistant implemented with Honkling-node and Electron.

Details about Honkling-node and Honkling-assistant can be found in:

Personalization

Honkling can be personalized to an individual user's accent. In our experiments, just 5 recordings per keyword increased accuracy by up to 10%! With a GPU, personalization completes in only 8 seconds.
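
A rough sketch of how such few-shot fine-tuning can be expressed with the standard TensorFlow.js training API is shown below; the data preparation, hyperparameters, and function names are assumptions for illustration, not Honkling's actual personalization code.

```javascript
// Sketch of few-shot personalization with the TensorFlow.js training API.
// Label encoding and hyperparameters are illustrative assumptions.
import * as tf from '@tensorflow/tfjs';

async function personalize(model, recordings, labels) {
  // recordings: array of MFCC feature matrices from ~5 user utterances
  // labels:     matching one-hot label vectors
  const xs = tf.stack(recordings.map(r => tf.tensor(r).expandDims(-1)));
  const ys = tf.stack(labels.map(l => tf.tensor(l)));

  model.compile({
    optimizer: tf.train.adam(1e-4),       // small learning rate for fine-tuning
    loss: 'categoricalCrossentropy',
  });

  // A handful of epochs over a few recordings runs in seconds on a GPU.
  await model.fit(xs, ys, { epochs: 10, batchSize: recordings.length });

  xs.dispose();
  ys.dispose();
}
```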

Pre-trained Weights

Pre-trained weights are available at Honkling-models.

Please run the following command to obtain pre-trained weights:

git submodule update --init --recursive

Customizing Honkling

Please refer to the honkling branch of honk to customize the keyword set or train a new model.

Once you obtain a weight file in JSON format using honk, move the file into the weights/ directory and append an assignment of the form weights[<weight_id>] = to link it to the weights object.

Depending on the change, config.js also has to be updated; a model object can then be instantiated as let model = new SpeechResModel(<weight_id>, commands);
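
Put together, the registration step might look like the following; the id "my-model" and the file contents are placeholders for whatever honk exports.

```javascript
// Sketch of registering a new weight file, following the steps above.
// The id "my-model" and the object contents are placeholders.

// In the file added under weights/ (generated from honk's JSON output):
weights['my-model'] = { /* ...weight tensors exported by honk... */ };

// After updating config.js to match the new keyword set, instantiate:
let model = new SpeechResModel('my-model', commands);
```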

Performance Evaluation

It is possible to evaluate the in-browser neural network inference performance of your device on the Evaluate Performance page of Honkling.

Evaluation is conducted on a subset of the validation and test sets used in training. Once the evaluation is complete, it will generate reports on input processing time (MFCC) and inference time.
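
The two numbers can be gathered with simple wall-clock timing in the browser; the sketch below illustrates one way to do it, where computeMFCC and model are placeholders for the app's own feature extractor and network rather than Honkling's actual evaluation code.

```javascript
// Illustrative per-example timing of input processing (MFCC) and inference.
// computeMFCC and model are placeholders for the app's own objects.
import * as tf from '@tensorflow/tfjs';

async function timeOneExample(model, audioSamples) {
  const t0 = performance.now();
  const mfcc = computeMFCC(audioSamples);            // input processing (MFCC)
  const t1 = performance.now();

  const input = tf.tensor(mfcc).expandDims(0).expandDims(-1);
  const scores = model.predict(input);
  await scores.data();                               // force GPU work to finish
  const t2 = performance.now();

  return { mfccMs: t1 - t0, inferenceMs: t2 - t1 };
}
```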

As part of our research, we explored the network slimming [2] technique to analyze trade-offs between accuracy and inference latency. With Honkling, it is possible to evaluate the performance of a pruned model as well!

The following is the evaluation result on a MacBook Pro (2017) with Firefox:

| Model | Amount Pruned (%) | Accuracy (%) | Input Processing (ms) | Inference (ms) |
| --- | --- | --- | --- | --- |
| RES8-NARROW | - | 90.78 | 21 | 10 |
| RES8-NARROW-40 | 40 | 88.99 | 21 | 9 |
| RES8-NARROW-80 | 80 | 84.90 | 22 | 9 |
| RES8 | - | 93.96 | 23 | 24 |
| RES8-40 | 40 | 93.99 | 23 | 17 |
| RES8-80 | 80 | 91.66 | 22 | 11 |

Reference

  1. Raphael Tang and Jimmy Lin. Deep Residual Learning for Small-Footprint Keyword Spotting. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), pages 5484-5488.
  2. Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang. Learning Efficient Convolutional Networks through Network Slimming. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV 2017), pages 2755-2763.