This toolkit provides the voice activity detection (VAD) code and our recorded dataset.
The VAD toolkit in this project was used in the following paper:
J. Kim and M. Hahn, "Voice Activity Detection Using an Adaptive Context Attention Model," in IEEE Signal Processing Letters, vol. PP, no. 99, pp. 1-1.
URL: https://ieeexplore.ieee.org/document/8309294/
If our VAD toolkit supports your research, we would appreciate it if you cite this paper.
ACAM is based on the recurrent attention model (RAM) [1]; implementations of RAM can be found in the jlindsey15 and jtkim-kaist repositories.
The VAD in this toolkit follows the procedure below:
In this toolkit, we use the multi-resolution cochleagram (MRCG) [2] as the acoustic feature, implemented in MATLAB. Note that MRCG extraction takes a relatively long time compared to the classifier.
This toolkit supports four types of MRCG-based classifiers, implemented in Python with TensorFlow:
- Python 3
- TensorFlow 1.1–1.3
- MATLAB 2017b (will be deprecated)
The default model provided in this toolkit was trained on our dataset, which is described in our paper.
The example MATLAB script is `main.m`; just run it in MATLAB.
The result will look like the following figure.
Note: To apply this toolkit to other speech data, the data must be sampled at a 16 kHz sampling frequency.
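If your data is at a different rate, you can convert it before running the toolkit. A minimal sketch using SciPy (SciPy is not part of this toolkit; this is just one way to resample):

```python
from math import gcd

import numpy as np
from scipy.signal import resample_poly

def to_16k(signal, orig_sr):
    """Resample a 1-D signal from orig_sr to the 16 kHz rate the toolkit expects."""
    if orig_sr == 16000:
        return signal
    g = gcd(16000, orig_sr)
    # Polyphase resampling with the reduced up/down ratio
    return resample_poly(signal, 16000 // g, orig_sr // g)

# A 1-second signal at 44.1 kHz becomes 16000 samples long
x = np.zeros(44100)
y = to_16k(x, 44100)
print(len(y))  # 16000
```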
Many people have asked about post-processing, so it has been added.
In the py branch, you can see the post-processing parameters of `utils.vad_func` in `main.py`. Each parameter handles one of the following error types:

- FEC (front-end clipping): `hang_before`
- MSC (mid-speech clipping): `off_on_length`
- OVER (hangover): `hang_over`
- NDS (noise detected as speech): `on_off_length`
Note that there is no single optimal parameter set; the optimal values depend on the application.
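To illustrate what these four parameters do, here is a simplified sketch of such post-processing on a binary frame-decision sequence. The parameter names follow `utils.vad_func`, but the logic below is an assumption for illustration, not the toolkit's actual implementation:

```python
import numpy as np

def _runs(d):
    """Yield (value, start, length) for each run of equal values."""
    start = 0
    for i in range(1, len(d) + 1):
        if i == len(d) or d[i] != d[start]:
            yield d[start], start, i - start
            start = i

def smooth_vad(decisions, hang_before=3, hang_over=3,
               off_on_length=2, on_off_length=2):
    """Illustrative post-processing sketch (not the exact utils.vad_func logic)."""
    d = np.asarray(decisions, dtype=bool)
    out = d.copy()
    # MSC: fill non-speech gaps shorter than off_on_length (off -> on)
    for val, s, ln in _runs(d):
        if not val and 0 < s and s + ln < len(d) and ln < off_on_length:
            out[s:s + ln] = True
    # NDS: drop speech bursts shorter than on_off_length (on -> off)
    d = out.copy()
    for val, s, ln in _runs(d):
        if val and ln < on_off_length:
            out[s:s + ln] = False
    # FEC / OVER: extend each remaining speech segment at both ends
    d = out.copy()
    for val, s, ln in _runs(d):
        if val:
            out[max(0, s - hang_before):s] = True
            out[s + ln:s + ln + hang_over] = True
    return out

res = smooth_vad([0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0],
                 hang_before=1, hang_over=1,
                 off_on_length=2, on_off_length=2)
print(res.astype(int).tolist())  # [0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]
```

The short non-speech gap is filled first, then the merged speech segment is padded by one frame on each side.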
Enjoy.
Note: Do not forget to add the path to this project in MATLAB.
```shell
# train.sh: training script options
#   -m 0 : ACAM
#   -m 1 : bDNN
#   -m 2 : DNN
#   -m 3 : LSTM
#   -e   : extract MRCG feature (1) or not (0)
python3 $train -m 0 -e 1 --prj_dir=$curdir
```
Our recorded dataset is freely available: Download
- Recording environments: bus stop, construction site, park, and room
- Recording device: a smartphone (Samsung Galaxy S8)
In each environment, conversational speech by two Korean male speakers was recorded, and the ground-truth labels were manually annotated. Because the recording was carried out in the real world, unexpected noises are included in the dataset, such as a baby crying, insects chirping, and mouse clicks. The details of the dataset are given in the following table:
| | Bus stop | Cons. site | Park | Room | Overall |
|---|---|---|---|---|---|
| Dur. (min) | 30.02 | 30.03 | 30.07 | 30.05 | 120.17 |
| Avg. SNR (dB) | 5.61 | 2.05 | 5.71 | 18.26 | 7.91 |
| % of speech | 40.12 | 26.71 | 26.85 | 30.44 | 31.03 |
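For reference, figures like the average SNR above can be estimated from a recording and its frame labels. A minimal sketch, assuming simple power estimates over speech- and non-speech-labeled samples (not the dataset's actual measurement procedure):

```python
import numpy as np

def avg_snr_db(signal, labels):
    """Rough SNR estimate in dB: (speech power - noise power) / noise power.
    `labels` marks speech samples (1) vs. non-speech (0); this is an
    illustrative assumption, not the dataset's measurement procedure."""
    signal = np.asarray(signal, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    p_speech = np.mean(signal[labels] ** 2)   # power over speech-labeled samples
    p_noise = np.mean(signal[~labels] ** 2)   # power over noise-only samples
    return 10.0 * np.log10((p_speech - p_noise) / p_noise)

# Synthetic check: noise of power 1, speech segment of power 11 -> 10 dB
sig = np.concatenate([np.ones(100), np.full(100, np.sqrt(11.0))])
lab = np.concatenate([np.zeros(100), np.ones(100)])
print(round(avg_snr_db(sig, lab), 2))  # 10.0
```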
If you find any errors in the code, please contact us.
E-mail: jtkim@kaist.ac.kr
Copyright (c) 2017 Speech and Audio Information Laboratory, KAIST, South Korea
License
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
[1] J. Ba, V. Mnih, and K. Kavukcuoglu, “Multiple object recognition with visual attention,” arXiv preprint arXiv:1412.7755, 2014.
[2] X.-L. Zhang and D. Wang, “Boosting contextual information for deep neural network based voice activity detection,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 24, no. 2, pp. 252-264, 2016.
[3] R. Zazo, T. N. Sainath, G. Simko, and C. Parada, “Feature learning with raw-waveform CLDNNs for voice activity detection,” in Proc. Interspeech, 2016.
Jaeseok Kim (KAIST) contributed to this project by converting the MATLAB scripts to Python.