Hi.
First, I want to thank you for the great work.
I am using the GigaSpeech pretrained model. I successfully built the LM and the TLG.fst decoding graph, and I ran ./tools/decode.sh with wfst_decode_opts and the correct fst_path. I get correct decoding/rescoring partial and final results in both the log file and the result file, but there is no timestamp information in either. I even tried running decoder_main with the unit_path option (passing words.txt from the pretrained model), but nothing changed.
Could you please tell me how I can get word-level timestamps?
Also, is there any way to get a lattice, as in Kaldi, instead of plain text when decoding with the TLG graph?
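For reference, this is roughly how I invoked decoding (paths are placeholders from my setup, and flag names may vary between WeNet versions; only fst_path and unit_path are the options I am asking about):

```shell
# Decoding through the helper script, with WFST options enabled:
./tools/decode.sh $wfst_decode_opts \
    --fst_path data/lang_test/TLG.fst \
    ...

# Running the runtime decoder directly, pointing unit_path at the
# pretrained model's words.txt:
./build/bin/decoder_main \
    --fst_path data/lang_test/TLG.fst \
    --unit_path words.txt \
    ...
```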
Thanks in advance.