flashlight / wav2letter

Facebook AI Research's Automatic Speech Recognition Toolkit
https://github.com/facebookresearch/wav2letter/wiki

Run Decoder binary with multiple GPUs #958

Closed tranmanhdat closed 3 years ago

tranmanhdat commented 3 years ago

I can't run the Decoder binary with 4 GPUs. I trained with 4 GPUs, but decoding fails with this command:

./build/Decoder --am /006_model_last.bin --test /train.lst --nthread_decoder 12 --lexicon /unigram-7211-nbest10.lexicon --uselexicon true --decodertype wrd --beamsize 100 --beamthreshold 20 --lmweight 0.6 --wordscore 0.6 --eosscore 0 --silscore 0 --unkscore 0 --smearing max -sclite /decodelog/

tlikhomanenko commented 3 years ago

You need to specify --nthread_decoder_am_forward=4, which will use 4 GPUs to run the forward pass on your test data. --nthread_decoder controls the decoding itself, which in your case runs on the CPU (for the zero LM or KenLM language models).
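For reference, a minimal sketch of the decode command with the suggested flag added; the paths and decoding parameters are simply carried over from the original post and are assumptions about that setup:

```bash
# Run the AM forward pass on 4 GPUs; decoding itself still uses 12 CPU threads.
./build/Decoder \
  --am /006_model_last.bin \
  --test /train.lst \
  --nthread_decoder_am_forward 4 \
  --nthread_decoder 12 \
  --lexicon /unigram-7211-nbest10.lexicon \
  --uselexicon true \
  --decodertype wrd \
  --beamsize 100 \
  --beamthreshold 20 \
  --lmweight 0.6 \
  --wordscore 0.6 \
  --eosscore 0 \
  --silscore 0 \
  --unkscore 0 \
  --smearing max \
  -sclite /decodelog/
```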

tranmanhdat commented 3 years ago

Yeah, it really helps, thanks.