hirofumi0810 / neural_sp

End-to-end ASR/LM implementation with PyTorch
Apache License 2.0

Fix bug in resuming multi-GPU training #329

Closed. hirofumi0810 closed this pull request 3 years ago.

hirofumi0810 commented 3 years ago

Stopped transferring optimizer parameters to the GPU before the GPU devices are configured; the transfer now happens after device setup, which fixes resuming multi-GPU training. The general pattern is sketched below.
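For context, resuming training in PyTorch typically follows the pattern below: load the checkpoint onto the CPU, move the model to the GPU(s), and only then move the optimizer's state tensors to the same device. This is a minimal, hypothetical sketch of that general pattern, not the exact code changed in this pull request; the function name, checkpoint keys, and `device` argument are assumptions.

```python
import torch

def resume(model, optimizer, checkpoint_path, device):
    """Sketch of resuming so optimizer state reaches the GPU only after device setup."""
    # Load the checkpoint onto the CPU first; do not touch the GPU yet.
    ckpt = torch.load(checkpoint_path, map_location='cpu')
    model.load_state_dict(ckpt['model_state_dict'])
    optimizer.load_state_dict(ckpt['optimizer_state_dict'])

    # Configure the GPU(s) first ...
    model.to(device)

    # ... and only then move the optimizer state tensors to the same device
    # as the parameters they belong to.
    for state in optimizer.state.values():
        for k, v in state.items():
            if torch.is_tensor(v):
                state[k] = v.to(device)

    return ckpt.get('epoch', 0)
```

In a multi-GPU setup, any `DataParallel` or `DistributedDataParallel` wrapping would likewise happen after the checkpoint is loaded on the CPU, so that no optimizer tensors are pinned to a device before the devices are chosen.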

codecov-commenter commented 3 years ago

Codecov Report

Merging #329 (1c0d6fa) into master (17ffccc) will decrease coverage by 0.05%. The diff coverage is 46.15%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #329      +/-   ##
==========================================
- Coverage   68.09%   68.03%   -0.06%     
==========================================
  Files          96       96              
  Lines       10831    10832       +1     
==========================================
- Hits         7375     7370       -5     
- Misses       3456     3462       +6     
Impacted Files                        Coverage Δ
neural_sp/bin/lm/train.py             73.42% <0.00%> (-0.34%) ↓
neural_sp/bin/asr/train.py            65.54% <50.00%> (-0.38%) ↓
neural_sp/trainers/lr_scheduler.py    80.30% <50.00%> (-2.14%) ↓
neural_sp/bin/train_utils.py          79.26% <100.00%> (ø)
neural_sp/models/base.py              58.33% <0.00%> (-1.67%) ↓

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 17ffccc...1c0d6fa.