kamilkk852 opened this issue 5 years ago
Same experience here, also with a language model.
I have the same problem. Have you guys found any way to accelerate it?
In this code, the author says we can speed things up with --njobs, but when I used multiple workers I got this error: joblib.externals.loky.process_executor.BrokenProcessPool. Has anyone found a way to solve this?
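Not from this repository, but for what it's worth: a very common cause of loky's BrokenProcessPool is launching parallel work at module top level, so each worker process re-imports the script and re-triggers the parallel call (or inherits CUDA state it can't use). A minimal sketch of the usual fix, using the stdlib pool here only for illustration — joblib's loky backend has the same `if __name__ == "__main__"` requirement. All names below are hypothetical, not from this codebase:

```python
from concurrent.futures import ProcessPoolExecutor

def decode_one(example_id):
    # Stand-in for a per-example beam-search call. In a real seq2seq setup,
    # note that a CUDA model generally cannot be shared with forked workers;
    # that is another frequent cause of a broken pool.
    return example_id * 2

def run_parallel(ids, n_jobs=2):
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(decode_one, ids))

if __name__ == "__main__":
    # The guard keeps worker processes from re-executing this call when they
    # re-import the main module (required on spawn-based platforms, and by loky).
    print(run_parallel([1, 2, 3]))
```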
Same experience here.
I have the same problem. Has anyone found a solution?
During training, validation runs at about 3-5 iterations per second (batch size = 16), but inference is extremely slow - about 4 minutes per example, which makes it completely impractical.
I'm using a GeForce RTX 2080 Ti, beam size = 20, no language model, only seq2seq.
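Part of the gap is inherent to beam search: each decoding step scores beam_size × vocab_size candidates sequentially, while validation (typically teacher-forced) runs the whole batch in one forward pass. A toy sketch of the candidate-expansion loop, with a context-free scoring table standing in for the model — purely illustrative, not the decoder from this repo:

```python
import math

VOCAB = ["a", "b", "</s>"]
# Toy log-probabilities, independent of context (a real model would score
# each beam's prefix with a forward pass here).
LOGP = {"a": math.log(0.5), "b": math.log(0.3), "</s>": math.log(0.2)}

def beam_search(beam_width, max_len=3):
    beams = [([], 0.0)]  # (tokens, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "</s>":
                candidates.append((tokens, score))  # finished beam carries over
                continue
            # beam_width * |V| candidates scored per step: this inner loop is
            # why decoding cost grows with beam size.
            for tok in VOCAB:
                candidates.append((tokens + [tok], score + LOGP[tok]))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```

In practice the usual levers are batching examples through the decoder together, shrinking the beam (quality often plateaus well below 20), and capping max decode length.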