Alexander-H-Liu / End-to-end-ASR-Pytorch

This is an open-source project (formerly named Listen, Attend and Spell - PyTorch Implementation) for end-to-end ASR, implemented with PyTorch, the well-known deep learning toolkit.
MIT License

Inference is extremely slow #30

Open kamilkk852 opened 5 years ago

kamilkk852 commented 5 years ago

During training, validation runs at about 3-5 iterations per second (batch size = 16), but inference is extremely slow: about 4 minutes per example, which makes it completely impractical.

I'm using a GeForce RTX 2080 Ti, beam size = 20, no language model, seq2seq only.
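[Editor's note] The gap between fast validation and slow inference is expected to some degree: validation is teacher-forced and batched, while beam-search decoding is token-by-token and rescores every live hypothesis at every step. The sketch below is a toy illustration of that cost structure, not the repository's decoder; `beam_search` and `toy_step` are hypothetical names, and the scorer is a made-up toy model.

```python
import math

def beam_search(step_fn, start_token, eos_token, beam_size, max_len):
    """Toy beam search over a user-supplied next-token scorer.

    step_fn(prefix) returns {token: log_prob} for the next token.
    Every decoding step rescores every live hypothesis, one token at
    a time, so the work grows with beam_size * output_length -- unlike
    batched, teacher-forced validation, which needs one forward pass.
    """
    beams = [([start_token], 0.0)]   # (tokens, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            for tok, logp in step_fn(tokens).items():
                candidates.append((tokens + [tok], score + logp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates[:beam_size]:
            # Hypotheses that just emitted EOS are done; others survive.
            (finished if tokens[-1] == eos_token else beams).append((tokens, score))
        if not beams:                # every surviving beam ended in EOS
            break
    finished.extend(beams)          # fall back to unfinished hypotheses
    return max(finished, key=lambda c: c[1])

# Hypothetical toy scorer: prefers "a", then emits EOS after 3 tokens.
def toy_step(prefix):
    if len(prefix) >= 3:
        return {"</s>": math.log(0.9), "a": math.log(0.1)}
    return {"a": math.log(0.6), "b": math.log(0.4)}
```

With beam size 20 and hundreds of decoding steps per utterance, each utterance costs thousands of sequential model calls, which is consistent with minutes-per-example decoding when nothing is batched across beams or utterances.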

Kabur commented 5 years ago

Same experience here, also with a language model.

Mohammadelc commented 5 years ago

I have the same problem. Have you guys found any way to accelerate it?

wanghzhRun commented 5 years ago

In this code, the author says decoding can be sped up with --njobs. However, when I used multiple workers, the code failed with joblib.externals.loky.process_executor.BrokenProcessPool. Has anyone found a way to solve this?
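[Editor's note] `BrokenProcessPool` generally means a worker process died before returning: common causes are launching the pool without an `if __name__ == "__main__":` guard (required under spawn-style start methods), a worker that is not picklable because it was defined inside another function, or every worker trying to initialize the same GPU and running out of memory. The sketch below shows the guarded structure; `decode_one` and `decode_all` are hypothetical placeholders, not functions from this repository, and a thread pool stands in for the process pool so the demo stays self-contained (the structure is the same either way).

```python
from concurrent.futures import ThreadPoolExecutor

def decode_one(utt_id):
    # Placeholder for beam-search decoding of one utterance.
    # With a *process* pool this function must live at module top
    # level so it can be pickled and sent to the workers.
    return utt_id, f"hyp-{utt_id}"

def decode_all(utt_ids, n_jobs=2):
    # Swap ThreadPoolExecutor for ProcessPoolExecutor (or joblib's
    # Parallel) for真 CPU parallelism; for GPU decoding, each process
    # would need its own CUDA context, so CPU decoding or n_jobs=1
    # is the usual workaround when workers crash with OOM.
    with ThreadPoolExecutor(max_workers=n_jobs) as ex:
        return dict(ex.map(decode_one, utt_ids))

if __name__ == "__main__":
    # Guarding the entry point prevents child processes from
    # re-executing the pool-creation code when they import the script.
    print(decode_all([1, 2, 3]))
```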

ByronHsu commented 4 years ago

Same experience here.

xjwla commented 3 years ago

I have the same problem. Does anyone have a solution?