jzlianglu / pykaldi2

Yet another speech toolkit based on Kaldi and PyTorch
MIT License

SETraining is extremely slow #21

Open SiyuanWei opened 4 years ago

SiyuanWei commented 4 years ago

Hi, I have completed the GMM stage and executed the CE training stage of a transformer based on a tri4b system. But when I try SE training on a Tesla P100, it is extremely slow, as the attached screenshot shows. Is this normal?

SiyuanWei commented 4 years ago

I also ran cProfile on the SE script, and it shows that the PyKaldi ASR decode function is the bottleneck.
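For reference, a minimal sketch of how such a profile can be collected with the standard library; the `run_se_training` entry point is a placeholder for whatever function launches SE training in your script, not a pykaldi2 API:

```python
import cProfile
import pstats

def run_se_training():
    # placeholder for the actual SE training entry point,
    # e.g. the main() of your training script
    ...

# dump profiling data to a file while the training runs
cProfile.run("run_se_training()", "se_profile.out")

# list the 20 largest cumulative-time functions; a decode
# bottleneck would show up at the top of this listing
pstats.Stats("se_profile.out").sort_stats("cumulative").print_stats(20)
```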

jzlianglu commented 4 years ago

Hi, this is not normal; it should not be this slow. Can you share your configs, like the batch_size, the decoding graph, etc.?

SiyuanWei commented 4 years ago

I use a batch size of 8, and the decoding graph and transition model come from a tri4b system of the Kaldi librispeech example. The other configurations shouldn't influence the decoding speed... hmm.

jzlianglu commented 4 years ago

The normal training speed should look like this:

```
:18:19.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:=> loaded checkpoint '/datablob/users/lial/pykaldi2/seed_model/model.6.tar'
2019-12-16T19:21:47.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:Epoch: [0][ 0/70311] Time 205.143 (205.143) Loss 5.7131e+00 (5.7131e+00) grad_norm 2.5644e+03 (2.5644e+03)
2019-12-16T19:22:13.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:Epoch: [0][ 10/70311] Time 233.091 (220.117) Loss 5.9513e+00 (5.8603e+00) grad_norm 2.6643e+03 (2.3814e+03)
2019-12-16T19:22:39.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:Epoch: [0][ 20/70311] Time 258.765 (233.209) Loss 6.0196e+00 (5.8791e+00) grad_norm 1.8992e+03 (2.3806e+03)
2019-12-16T19:23:06.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:Epoch: [0][ 30/70311] Time 285.682 (246.008) Loss 5.7842e+00 (5.8806e+00) grad_norm 2.2057e+03 (2.3459e+03)
2019-12-16T19:23:32.000Z /container_e09_1568924618046_16257_01_000002: [1,0]:Epoch: [0][ 40/70311] Time 311.816 (259.190) Loss 5.8167e+00 (5.8767e+00) grad_norm 2.8424e+03 (2.2931e+03)
```

The batch size here is 4, and I used one V100 GPU. It took less than 30 seconds for 10 updates. I suspect your model diverged at the very beginning of training, which means decoding the utterances takes much longer because the model is poor. I would suggest paying attention to the learning rate, the decoding configs, and CE and/or L2 regularization. Note that using a larger momentum is very helpful for SE training, in my experience.
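To make those suggestions concrete, here is a minimal PyTorch sketch of the knobs in question; the toy model, the loss placeholder, and all the numbers are illustrative values to tune, not pykaldi2 defaults:

```python
import torch

# toy stand-in for the acoustic model (illustrative only)
model = torch.nn.Linear(40, 3000)

# larger momentum plus L2 regularization (weight_decay), as suggested;
# the values here are placeholders, not recommendations
optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-4,
                            momentum=0.9,
                            weight_decay=1e-4)

feats = torch.randn(8, 40)              # dummy batch of features
targets = torch.randint(0, 3000, (8,))  # dummy senone targets

# CE smoothing: interpolate the sequence-level (SE) loss with the
# frame-level CE loss to keep the model from diverging early on
logits = model(feats)
ce_loss = torch.nn.functional.cross_entropy(logits, targets)
se_loss = logits.mean()                 # placeholder for the real MMI/sMBR loss
ce_weight = 0.1                         # illustrative smoothing weight
loss = se_loss + ce_weight * ce_loss

optimizer.zero_grad()
loss.backward()
# gradient clipping also helps keep grad_norm under control
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```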