novitoll closed this issue 7 years ago
Just curious, why would you need that and which prepared layers do you mean? Pocketsphinx has nothing to do with neural networks. Am I missing something?
@gorinars, right, pocketsphinx has nothing to do with NNs; it uses a plain HMM acoustic model. I'm just curious how to speed it up. For example, ASR on a 1:26:31 .wav file (166.1 MB on disk, 16 kHz sample rate, pcm-16-le, mono) took ~37 min. Maybe there are options to offload heavy computation (MFCC extraction, for example) to the GPU.
MFCC calculation is not as heavy as the model computations and the search itself, so I am not sure much gain from a GPU can be achieved right now.
Check http://cmusphinx.sourceforge.net/wiki/decodertuning for some hints on how to make decoding faster.
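The usual tuning knobs there are the search beams and per-frame pruning limits. A sketch of what that can look like on the command line (the values below are illustrative, not recommendations; check the defaults for your pocketsphinx version and retest accuracy after tightening anything):

```shell
# Illustrative values only: tighter beams and pruning caps trade
# some accuracy for speed.
# -beam / -wbeam narrow the main and word-exit search beams,
# -maxhmmpf caps active HMMs per frame,
# -bestpath no skips the best-path rescoring pass.
pocketsphinx_continuous \
    -infile long_recording.wav \
    -beam 1e-40 \
    -wbeam 1e-20 \
    -maxhmmpf 10000 \
    -bestpath no \
    > transcript.txt
```

Always measure WER on a held-out sample before and after, since aggressive pruning can degrade results noticeably.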
You can also consider cutting long wavs into segments and processing the parts on different threads/machines if you do batch decoding.
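For the segment-and-parallelize approach, the splitting itself needs nothing beyond the Python standard library. A minimal sketch (the function name and segment length are my own choices; each output file can then be fed to a separate `pocketsphinx_continuous -infile` process, e.g. via `multiprocessing.Pool` and `subprocess`):

```python
import math
import os
import wave

def split_wav(path, out_dir, seg_seconds=60):
    """Split a mono PCM wav into fixed-length segments so each
    piece can be decoded by a separate process or machine."""
    segments = []
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_seg = seg_seconds * src.getframerate()
        n_segs = math.ceil(src.getnframes() / frames_per_seg)
        for i in range(n_segs):
            # The last segment is simply shorter than seg_seconds.
            data = src.readframes(frames_per_seg)
            out_path = os.path.join(out_dir, "seg_%03d.wav" % i)
            with wave.open(out_path, "wb") as dst:
                # Copy sample rate/width/channels; the frame count
                # is fixed up automatically when the file is closed.
                dst.setparams(params)
                dst.writeframes(data)
            segments.append(out_path)
    return segments
```

Note that a hard cut every N seconds can land mid-word; if that hurts accuracy, cut at silences instead (e.g. with a simple energy threshold) rather than at fixed offsets.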
Got it, thanks
Is it possible to run `pocketsphinx_continuous` on GPU? Are there any utils or prepared layers for that? Thanks