Even though Voice Activity Detection is present in the repository, I don't see `interface.py` calling the VAD helper methods (`init_noise()` and `filter()`, respectively) on the input data. It looks like the GMMs are trained directly on the extracted features. Why is it this way?
If I am wrong, could you point me to where VAD is applied to the enrolled speech data?
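For context, this is roughly the enrollment flow I expected, with VAD run on the raw signal before feature extraction. The `SimpleVAD` class below is my own toy energy-based stand-in (only the method names `init_noise()` / `filter()` match the helpers I mentioned); it is not the repository's actual implementation:

```python
import numpy as np

class SimpleVAD:
    """Toy energy-based VAD, for illustration only.
    Mirrors the init_noise()/filter() method names but is NOT
    the repository's implementation."""

    def init_noise(self, noise, rate, frame_len=0.02):
        # Estimate a per-frame energy threshold from a noise-only clip.
        frame = int(rate * frame_len)
        frames = noise[: len(noise) // frame * frame].reshape(-1, frame)
        self.threshold = frames.std(axis=1).mean() * 3.0

    def filter(self, signal, rate, frame_len=0.02):
        # Keep only frames whose energy exceeds the noise threshold.
        frame = int(rate * frame_len)
        frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
        voiced = frames[frames.std(axis=1) > self.threshold]
        return voiced.reshape(-1)

rate = 8000
noise = np.random.default_rng(0).normal(0, 0.01, rate)       # 1 s of low-level noise
speech = np.sin(2 * np.pi * 440 * np.arange(rate) / rate)    # 1 s of a loud tone

vad = SimpleVAD()
vad.init_noise(noise, rate)                                  # calibrate on noise
clean = vad.filter(np.concatenate([noise, speech]), rate)    # drop silent frames
# clean now contains only the loud half; features (and then the GMM)
# would be computed from this filtered signal rather than the raw input.
```

In other words, I expected something like `filter()` to sit between the raw audio and the feature-extraction step during enrollment.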
Thanks