Closed · sxjscience closed this pull request 3 years ago
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1461/fix_benchmark/index.html
Merging #1461 (1d554bd) into master (675b7c3) will not change coverage. The diff coverage is n/a.
@@           Coverage Diff           @@
##           master    #1461   +/-   ##
=======================================
  Coverage   85.80%   85.80%
=======================================
  Files          52       52
  Lines        6855     6855
=======================================
  Hits         5882     5882
  Misses        973      973
Continue to review full report at Codecov.
Powered by Codecov. Last update 675b7c3...1d554bd. Read the comment docs.
Description
I noticed that we need to manually disable multiprocessing.
Also, the original Hugging Face benchmark does not call torch.cuda.synchronize(). Because CUDA calls are asynchronous, timing without synchronization makes the comparison unfair, so I added torch.cuda.synchronize() to the benchmarking script. (See the HF implementation: https://github.com/huggingface/transformers/blob/ab17758874f62c03b6e5627f846a697920b16dd8/src/transformers/benchmark/benchmark.py#L171-L194). @Cli212
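To illustrate the point about asynchronous CUDA calls: without a synchronization barrier, the host clock only measures kernel-launch overhead, not the actual GPU work. Below is a minimal, hedged sketch of the pattern (not the actual GluonNLP or HF code); it uses only the standard library and takes a `synchronize` callable as a parameter. In a real GPU benchmark one would pass `synchronize=torch.cuda.synchronize`; the no-op default stands in for the CPU case.

```python
import time

def benchmark(fn, warmup=2, repeat=5, synchronize=lambda: None):
    """Time fn, flushing pending async work before reading the clock.

    For GPU workloads, pass synchronize=torch.cuda.synchronize so that
    queued CUDA kernels finish before the stop timestamp is taken.
    (Illustrative sketch; not the actual benchmarking script.)
    """
    # Warm-up runs: exclude one-time costs (allocation, autotuning, JIT).
    for _ in range(warmup):
        fn()
    synchronize()

    times = []
    for _ in range(repeat):
        start = time.perf_counter()
        fn()
        synchronize()  # without this, only launch overhead is measured
        times.append(time.perf_counter() - start)
    return min(times)
```

Taking the minimum over repeats is a common choice for latency benchmarks, since it is the measurement least contaminated by background noise.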
Checklist
Essentials
cc @dmlc/gluon-nlp-team