Honkling is a novel web application that implements an in-browser keyword spotting system with TensorFlow.js.
Honkling can efficiently identify simple commands (e.g., "stop" and "go") in-browser without a network connection. It demonstrates cross-platform speech recognition capabilities for interactive intelligent agents with its pure JavaScript implementation. For more details, please consult our writeup:
Honkling implements a residual convolutional neural network [1] and uses the Speech Commands Dataset for training.
A Node.js implementation of Honkling is also available under the Honkling-node folder.
Honkling-assistant is a customizable, voice-enabled virtual assistant implemented using Honkling-node and Electron.
Details about Honkling-node and Honkling-assistant can be found in:
Honkling can be personalized to an individual user by adapting to their accent. Our experiments show that as few as 5 recordings per keyword can increase accuracy by up to 10%! With a GPU, personalization can be completed in as little as 8 seconds.
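Purely as an illustration of the idea (not Honkling's actual personalization code), few-shot fine-tuning with TensorFlow.js on a handful of user recordings could look roughly like this, assuming the model is a tf.LayersModel and the MFCC features and one-hot labels for the new recordings are already prepared:

```javascript
// Illustrative sketch only: tensor shapes, learning rate, and the
// `personalize` helper are assumptions, not part of Honkling's API.
async function personalize(model, userMfccs, userLabels) {
  const xs = tf.tensor(userMfccs);   // e.g. [numRecordings, time, mfccBins, 1]
  const ys = tf.tensor(userLabels);  // one-hot encoded keyword labels
  model.compile({
    optimizer: tf.train.sgd(0.01),
    loss: 'categoricalCrossentropy',
  });
  await model.fit(xs, ys, { epochs: 5, batchSize: userMfccs.length });
  xs.dispose();
  ys.dispose();
}
```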
Pre-trained weights are available at Honkling-models.
Please run the following command to obtain pre-trained weights:
```bash
git submodule update --init --recursive
```
Please refer to the honkling branch of honk to customize the keyword set or train a new model.
Once you obtain a weight file in JSON format using honk, move the file into the weights/ directory and append a `weights[<weight_id>] =` assignment to link it to the `weights` object.
Depending on the change, `config.js` also has to be updated; a model object can then be instantiated with `let model = new SpeechResModel(<weight_id>, commands);`
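As a rough sketch of these steps (the `"RES8-NARROW-CUSTOM"` id and the keyword list below are made-up examples, not shipped with Honkling):

```javascript
// weights/<weight_id>.js — hypothetical id; the object literal is the JSON exported by honk
weights["RES8-NARROW-CUSTOM"] = { /* exported weight tensors from honk */ };

// after updating config.js to match, instantiate the model with the new id
let commands = ["silence", "unknown", "stop", "go"];  // example keyword set
let model = new SpeechResModel("RES8-NARROW-CUSTOM", commands);
```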
It is possible to evaluate the in-browser neural network inference performance of your device on the Evaluate Performance page of Honkling.
Evaluation is conducted on a subset of the validation and test sets used in training. Once the evaluation is complete, it will generate reports on input processing time (MFCC) and inference time.
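The report covers the two stages separately; purely as an illustration (not Honkling's actual code), they could be timed in the browser like this, where `extractMFCC` and `model.predict` are placeholder names:

```javascript
// Placeholder sketch: extractMFCC and model.predict are assumed names.
function timeOneSample(audioSamples, model) {
  const t0 = performance.now();
  const mfcc = extractMFCC(audioSamples);   // input processing (MFCC)
  const t1 = performance.now();
  const scores = model.predict(mfcc);       // in-browser neural network inference
  const t2 = performance.now();
  return { inputProcessingMs: t1 - t0, inferenceMs: t2 - t1, scores };
}
```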
As part of our research, we explored the network slimming [2] technique to analyze trade-offs between accuracy and inference latency. With Honkling, it is possible to evaluate the performance of a pruned model as well!
The following are evaluation results on a MacBook Pro (2017) with Firefox:
Model | Amount Pruned (%) | Accuracy (%) | Input Processing (ms) | Inference (ms) |
---|---|---|---|---|
RES8-NARROW | - | 90.78 | 21 | 10 |
RES8-NARROW-40 | 40 | 88.99 | 21 | 9 |
RES8-NARROW-80 | 80 | 84.90 | 22 | 9 |
RES8 | - | 93.96 | 23 | 24 |
RES8-40 | 40 | 93.99 | 23 | 17 |
RES8-80 | 80 | 91.66 | 22 | 11 |