alumae / gst-kaldi-nnet2-online

GStreamer plugin around Kaldi's online neural network decoder
Apache License 2.0

Postprocessing skips words in final result #74

Open ldeseynes opened 5 years ago

ldeseynes commented 5 years ago

Hi Tanel! First of all, thanks for your great work.

I'm using gst-kaldi-nnet2-online to decode French speech. When running the client.py script, I get a fairly accurate transcription with my model, but at some point the decoding stops and starts again a few seconds later, which results in missing words in the output. Here is an example of the result, with two dots marking the missing words at the end of each sentence:

```
bonjour , je m' appelle Jean-Christophe je suis agriculteur dans le Loiret sur une exploitation céréalières . je me suis installé il y a une dizaine d' années ..

trente-cinq ans aujourd' hui . je suis papa de trois enfants .. j' ai repris l' exploitation qui était consacré à la culture de la betterave sucrière de céréales depuis que je suis installé diversifiés .. j' y cultive aujourd' hui des oléagineux , comme le colza du maïs .. plusieurs types de céréales ..
```

Every time this occurs, I get a warning from worker.py:

```
WARNING ([5.5.76~1-535b]:LatticeWordAligner():word-align-lattice.cc:263) [Lattice has input epsilons and/or is not input-deterministic (in Mohri sense)] -- i.e. lattice is not deterministic. Word-alignment may be slow and-or blow up in memory.
```

Any idea about this issue? Maybe there is a way to control the length of the final result. For example, I get the following output from worker.py:

```
2018-10-16 16:59:55 - INFO: decoder2: 2faa241b-89ed-497a-9c2a-3b974bf1f8da: Got final result: bonjour , je m' appelle Jean-Christophe je suis agriculteur dans le Loiret sur une exploitation céréalières . je me suis installé il y a une dizaine d' années .
```

How can I change the code to split the result into two shorter ones? Thanks in advance!
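For reference, one way to get shorter final results is to make endpointing trigger on shorter pauses rather than changing the code. The property names below are an assumption based on Kaldi's `OnlineEndpointConfig` options (the plugin registers Kaldi options with dots replaced by dashes); verify them with `gst-inspect-1.0 ./libgstkaldionline2.so kaldinnet2onlinedecoder` before relying on them:

```yaml
# Hypothetical sketch: tighten endpointing so final results are emitted on
# shorter pauses. Property names assume the plugin exposes Kaldi's endpoint
# rules; check with gst-inspect-1.0 before use.
do-endpointing: true
endpoint-silence-phones: "1:2:3:4:5:6:7:8:9:10"
# Rule 2 fires once the utterance has produced words; lowering its trailing
# silence (0.5 s by default in Kaldi) makes the decoder finalize sooner:
endpoint-rule2-min-trailing-silence: 0.3
```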
tingyang01 commented 4 years ago

Hello ldeseynes, I'm glad you got a reasonable result for French STT. I've also tried to use gst-kaldi-nnet2-online to decode French speech, but I got only one word per call. When I tested the same model with offline decoding, it gave a reasonable result. I configured worker.yaml like this:

```yaml
use-nnet2: True
decoder:
    # All the properties nested here correspond to the kaldinnet2onlinedecoder
    # GStreamer plugin properties.
    # Use gst-inspect-1.0 ./libgstkaldionline2.so kaldinnet2onlinedecoder
    # to discover the available properties
    use-threaded-decoder: True
    model: test/models/french/librifrench/final.mdl
    word-syms: test/models/french/librifrench/words.txt
    fst: test/models/french/librifrench/HCLG.fst
    mfcc-config: test/models/french/librifrench/conf/mfcc.conf
    ivector-extraction-config: test/models/french/librifrench/conf/ivector_extractor.conf
    max-active: 10000
    beam: 40.0
    lattice-beam: 6.0
    acoustic-scale: 0.083
    do-endpointing: true
    endpoint-silence-phones: "1:2:3:4:5:6:7:8:9:10"
    traceback-period-in-secs: 0.25
    chunk-length-in-secs: 0.25
    num-nbest: 10
    # Additional functionality that you can play with:
    #lm-fst: test/models/english/librispeech_nnet_a_online/G.fst
    #big-lm-const-arpa: test/models/english/librispeech_nnet_a_online/G.carpa
    #phone-syms: test/models/english/librispeech_nnet_a_online/phones.txt
    #word-boundary-file: test/models/english/librispeech_nnet_a_online/word_boundary.int
    #do-phone-alignment: true

out-dir: tmp

use-vad: False
silence-timeout: 10

# Just a sample post-processor that appends "." to the hypothesis
post-processor: perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\1./;'
```

Then I got the following log message.
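As an aside, the perl post-processor above only appends a period to every hypothesis line. An equivalent stand-alone Python filter (a hypothetical substitute, not part of the server) would look like this:

```python
import sys

def postprocess(line: str) -> str:
    """Append a '.' to one hypothesis line (trailing newline stripped)."""
    return line.rstrip("\n") + "."

if __name__ == "__main__":
    # Read hypotheses from stdin and write them back out unbuffered,
    # mirroring the perl one-liner's STDOUT->autoflush(1).
    for line in sys.stdin:
        print(postprocess(line), flush=True)
```

If saved as, say, postprocess.py, it could be referenced in the yaml as `post-processor: python3 postprocess.py`.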

I would be grateful if you could tell me what's wrong with my configuration. How can I get a correct result? Thanks for your help.

ldeseynes commented 4 years ago

Hi, In your yaml config file, set acoustic-sclale to 1.0 and add frame-subsampling-factor: 3

tingyang01 commented 4 years ago

Hello, I changed the config file and tested again:

```yaml
use-nnet2: True
decoder:
    # All the properties nested here correspond to the kaldinnet2onlinedecoder
    # GStreamer plugin properties.
    # Use gst-inspect-1.0 ./libgstkaldionline2.so kaldinnet2onlinedecoder
    # to discover the available properties
    use-threaded-decoder: True
    model: test/models/french/librifrench/final.mdl
    word-syms: test/models/french/librifrench/words.txt
    fst: test/models/french/librifrench/HCLG.fst
    mfcc-config: test/models/french/librifrench/conf/mfcc.conf
    ivector-extraction-config: test/models/french/librifrench/conf/ivector_extractor.conf
    max-active: 10000
    beam: 13.0
    lattice-beam: 8.0
    acoustic-scale: 1.0
    frame-subsampling-factor: 3
    #acoustic-scale: 0.083
    do-endpointing: true
    #endpoint-silence-phones: "1:2:3:4:5:6:7:8:9:10"
    endpoint-silence-phones: "1:2:3:4:5"
    traceback-period-in-secs: 0.25
    chunk-length-in-secs: 0.25
    num-nbest: 10
    # Additional functionality that you can play with:
    #lm-fst: test/models/english/librispeech_nnet_a_online/G.fst
    #big-lm-const-arpa: test/models/english/librispeech_nnet_a_online/G.carpa
    #phone-syms: test/models/english/librispeech_nnet_a_online/phones.txt
    #word-boundary-file: test/models/english/librispeech_nnet_a_online/word_boundary.int
    #do-phone-alignment: true

out-dir: tmp
```

With that I get the following result:

```
une. l' ai. jamais. de. Audio sent, now sending EOS de. une. l' ai. de.
```

Could you share your parameters? I can share my model. I hope you can help me.

tingyang01 commented 4 years ago

I just found the following command:

```
online2-wav-nnet2-latgen-faster --online=true --do-endpointing=false \
  --config=exp/nnet2_online/nnet_ms_a_online/conf/online_nnet2_decoding.conf \
  --max-active=7000 --beam=15.0 --lattice-beam=6.0 --acoustic-scale=0.1 \
  --word-symbol-table=exp/tri4b/graph_SRILM/words.txt \
  exp/nnet2_online/nnet_ms_a_online/final.mdl \
  exp/tri4b/graph_SRILM/HCLG.fst \
  ark:data/test_hires/split8/1/spk2utt \
  'ark,s,cs:extract-segments scp,p:data/test_hires/split8/1/wav.scp data/test_hires/split8/1/segments ark:- |' \
  'ark:|gzip -c > exp/nnet2_online/nnet_ms_a_online/decode_SRILM/lat.1.gz'
```

It decodes audio files like this:

```
LOG (online2-wav-nnet2-latgen-faster[5.5.463~1-9f3d8]:main():online2-wav-nnet2-latgen-faster.cc:276) Decoded utterance 13-1410-0030
13-1410-0031 rez de
LOG (online2-wav-nnet2-latgen-faster[5.5.463~1-9f3d8]:main():online2-wav-nnet2-latgen-faster.cc:276) Decoded utterance 13-1410-0031
13-1410-0032 de
LOG (online2-wav-nnet2-latgen-faster[5.5.463~1-9f3d8]:main():online2-wav-nnet2-latgen-faster.cc:276) Decoded utterance 13-1410-0032
13-1410-0033 rit de
LOG (online2-wav-nnet2-latgen-faster[5.5.463~1-9f3d8]:main():online2-wav-nnet2-latgen-faster.cc:276) Decoded utterance 13-1410-0033
13-1410-0034 de
LOG (online2-wav-nnet2-latgen-faster[5.5.463~1-9f3d8]:main():online2-wav-nnet2-latgen-faster.cc:276) Decoded utterance 13-1410-0034
13-1410-0035 de
```

I would appreciate it if you could take a careful look at this.

ldeseynes commented 4 years ago

Hi,

Here are the parameters I set but I have not used the system for a while. In your yaml file, you should add nnet-mode: 3. Also, check that you're decoding your audio file with the correct sample rate and number of channels.

```
use-threaded-decoder=true
nnet-mode=3
frame-subsampling-factor=3
acoustic-scale=1.0
model=models/final.mdl
fst=models/HCLG.fst
word-syms=models/words.txt
phone-syms=models/phones.txt
word-boundary-file=models/word_boundary.int
num-nbest=10
num-phone-alignment=3
do-phone-alignment=true
feature-type=mfcc
mfcc-config=models/conf/mfcc.conf
ivector-extraction-config=models/conf/ivector_extractor.conf
max-active=1000
beam=11.0
lattice-beam=5.0
do-endpointing=true
endpoint-silence-phones="1:2:3:4:5:6:7:8:9:10"
chunk-length-in-secs=0.23
phone-determinize=true
determinize-lattice=true
frames-per-chunk=10
```
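To verify the sample rate and channel count of a test file, a quick stand-alone check using only the Python standard library could look like this (a sketch; the expected values below assume a 16 kHz mono 16-bit model, adjust to whatever your mfcc.conf expects):

```python
import wave

def check_wav(path: str, rate: int = 16000, channels: int = 1,
              sampwidth: int = 2) -> bool:
    """Return True if the WAV file is 16 kHz, mono, 16-bit PCM."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == rate
                and w.getnchannels() == channels
                and w.getsampwidth() == sampwidth)
```

For example, `check_wav("test.wav")` returns `True` only when the file matches what the acoustic model was trained on; a mismatch here is a common cause of garbage output.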

tingyang01 commented 4 years ago

Thanks for your kind reply. It seems you are using an NNet3 online model, whereas I trained an NNet2 online model. I tried setting nnet-mode=3, but the engine crashed. Let me know what you think. Best, Ting

ldeseynes commented 4 years ago

What's your command to start the scripts?

tingyang01 commented 4 years ago

I am using the GStreamer server and client from https://github.com/alumae/kaldi-gstreamer-server like this:

My french_stt.yaml is as follows:

```yaml
use-nnet2: True
decoder:
    use-threaded-decoder: True
    model: test/models/french/librifrench/final.mdl
    word-syms: test/models/french/librifrench/words.txt
    fst: test/models/french/librifrench/HCLG.fst
    mfcc-config: test/models/french/librifrench/conf/mfcc.conf
    ivector-extraction-config: test/models/french/librifrench/conf/ivector_extractor.conf
    max-active: 1000
    beam: 13.0
    lattice-beam: 8.0
    acoustic-scale: 1.0
    frame-subsampling-factor: 3
    do-endpointing: true
    nnet-mode: 2
    endpoint-silence-phones: "1:2:3:4:5"
    traceback-period-in-secs: 0.25
    chunk-length-in-secs: 0.25
    num-nbest: 10
    frames-per-chunk: 10

out-dir: tmp

use-vad: False
silence-timeout: 10

post-processor: perl -npe 'BEGIN {use IO::Handle; STDOUT->autoflush(1);} s/(.*)/\1./;'

logging:
    version: 1
    disable_existing_loggers: False
    formatters:
        simpleFormater:
            format: '%(asctime)s - %(levelname)7s: %(name)10s: %(message)s'
            datefmt: '%Y-%m-%d %H:%M:%S'
    handlers:
        console:
            class: logging.StreamHandler
            formatter: simpleFormater
            level: DEBUG
    root:
        level: DEBUG
        handlers: [console]
```

I've confirmed that test.wav is 16 kHz, 16-bit, mono.

Let me know what you think. I look forward to hearing from you.

ldeseynes commented 4 years ago

This looks fine to me. Just check the parameters you used for training (acoustic scale and frame-subsampling factor), because I'm not sure about their values in the nnet2 setup. In any case, you would be better off with a more recent model if you want decent results.

tingyang01 commented 4 years ago

Thanks for your reply. Have you ever compared Kaldi's nnet-latgen-faster and online2-wav-nnet2-latgen-faster? I think the problem may come from the difference between these two decoding methods. Also, could you tell me more about the more recent model you mentioned?

ldeseynes commented 4 years ago

Just use a chain model: you'll get better results, and the recipe is documented in far more detail.

tingyang01 commented 4 years ago

I built my French STT model using wsj/s5/local/online/run_nnet2.sh. Thanks, let me try again.

tingyang01 commented 4 years ago

One more thing: do you mean an NNet3 chain model? Could you tell me which script I should use?

ldeseynes commented 4 years ago

Sure, you can retrain a model using tedlium/s5_r3/run.sh. You don't need the RNNLM stuff after stage 18 for your GStreamer application.

tingyang01 commented 4 years ago

Thank you, I will try.