parlance / ctcdecode

PyTorch CTC Decoder bindings
MIT License

problem with CTCBeamDecoder.decode() when using a big (.arpa / .binary) file #205

Open aybberrada opened 2 years ago

aybberrada commented 2 years ago

I'm interested in using a KenLM language model to decode/score the outputs of my speech recognition model.

When I initialize my CTCBeamDecoder with model_path='./test.arpa', a small .arpa file (~4 KB) used just for testing, everything works: CTCBeamDecoder.decode() produces output with no issue at all.

But when I try using the actual .arpa file for my project ( 3-gram.pruned.1e-7.arpa.gz ), which is ~90 MB, it either crashes instantly or hangs indefinitely without producing any output. I also built a .binary file from this .arpa file to use instead, but I run into the same problem.
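Since the path above ends in .arpa.gz, one thing worth ruling out (an assumption on my part, not something confirmed in this thread) is that the file being handed to the decoder is still gzip-compressed. A quick stdlib-only check, with a hypothetical helper name:

```python
def is_gzipped(path):
    """Return True if the file at `path` is gzip-compressed.

    gzip streams always start with the two magic bytes 0x1f 0x8b,
    so reading the first two bytes is enough to tell.
    """
    with open(path, "rb") as f:
        return f.read(2) == b"\x1f\x8b"
```

If this returns True for the model file, decompressing it first (e.g. with `gunzip`) before passing the plain .arpa path to CTCBeamDecoder is worth trying.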

I tracked the problem down to ctc_decode.paddle_beam_decode_lm.

Is it simply that inference with a big .arpa file requires a LOT of RAM? (I have 8 GB.) If that's the case, how much RAM would I need for a file of this size?
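Assuming RAM really is the bottleneck, a rough sanity check is to read the n-gram counts from the `\data\` header of the .arpa file: KenLM has to hold every n-gram in memory, so the total count gives a ballpark lower bound on the footprint. A minimal stdlib-only sketch (helper name is my own invention):

```python
import re

def arpa_ngram_counts(path):
    """Parse the \\data\\ header of an ARPA file and return a dict
    mapping n-gram order to count, e.g. {1: 200000, 2: 4000000}."""
    counts = {}
    in_data = False
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            line = line.strip()
            if line == "\\data\\":
                in_data = True
                continue
            if in_data:
                m = re.match(r"ngram (\d+)=(\d+)", line)
                if m:
                    counts[int(m.group(1))] = int(m.group(2))
                elif line:
                    # first non-empty line that isn't a count ends the header
                    break
    return counts
```

The exact per-n-gram cost depends on which KenLM data structure is used, but it is typically on the order of tens of bytes per n-gram, so a 90 MB gzipped .arpa can easily expand to several times that once loaded; summing the counts from this helper gives a rough idea of whether 8 GB is plausible.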

afmsaif commented 1 year ago

I am facing the same problem. Have you solved it?