AI4Bharat / IndicWav2Vec

Pretraining, fine-tuning and evaluation scripts for Indic-Wav2Vec2
https://indicnlp.ai4bharat.org/indicwav2vec
MIT License

Decoding is slow. Is there anything to make it faster? It is not using multiple cores, at least with the default commands in the README. #9

Closed: raotnameh closed this issue 2 years ago

raotnameh commented 2 years ago

I am using Docker.

RamanHacks commented 2 years ago

Hi @raotnameh, decoding speed depends on the beam size, the size of the language model, and similar factors. Also, the current fairseq implementation does not support batch inference, so the process is slow (and runs on a single thread). We are currently working on releasing an HF-compatible model, which should speed up decoding through batch inference.
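For reference, here is a minimal sketch of what batched inference looks like with a Hugging Face wav2vec2 checkpoint, once an HF-compatible model is available. The checkpoint name `facebook/wav2vec2-base-960h` is a public English stand-in, not an IndicWav2Vec release, and this shows only greedy CTC decoding without a language model:

```python
# Minimal sketch of batched greedy CTC decoding with a Hugging Face
# wav2vec2 model. The checkpoint below is a public English stand-in,
# NOT an IndicWav2Vec release.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_ID = "facebook/wav2vec2-base-960h"  # placeholder checkpoint

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def transcribe_batch(waveforms, sampling_rate=16_000):
    """Transcribe a list of 1-D float32 waveforms in one forward pass."""
    # Pad all clips to the length of the longest one so the whole
    # batch goes through the model as a single tensor.
    inputs = processor(
        waveforms,
        sampling_rate=sampling_rate,
        return_tensors="pt",
        padding=True,
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    # Greedy CTC decoding: take the most likely token per frame,
    # then collapse repeats and blanks in batch_decode.
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)
```

Beam-search decoding with a language model can also be parallelised on the Hugging Face side: `Wav2Vec2ProcessorWithLM.batch_decode` accepts a multiprocessing pool, so the pyctcdecode beam search can run across CPU cores rather than on a single thread.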