dmlc / gluon-nlp

NLP made easy
https://nlp.gluon.ai/
Apache License 2.0

[Benchmark] Update HuggingFace benchmarking #1461

Closed sxjscience closed 3 years ago

sxjscience commented 3 years ago

Description

I noticed that we need to manually disable multiprocessing in the HuggingFace benchmark.
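For reference, a minimal sketch of how I understand the multiprocessing can be switched off through the benchmark arguments. The `multi_process` field of `PyTorchBenchmarkArguments` is an assumption on my side and may differ across transformers versions:

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

# Assumed API: multi_process=False runs the measurement in the current
# process instead of spawning a child process per benchmark run.
args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],
    batch_sizes=[8],
    sequence_lengths=[128],
    multi_process=False,
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()
```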

Also, the original HuggingFace benchmark does not call torch.cuda.synchronize(). Because CUDA calls are asynchronous, the comparison won't be fair without it, so I added torch.cuda.synchronize() to the benchmarking script. (See the HF implementation: https://github.com/huggingface/transformers/blob/ab17758874f62c03b6e5627f846a697920b16dd8/src/transformers/benchmark/benchmark.py#L171-L194).
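To illustrate why the synchronization matters when timing GPU inference, here is a minimal timing sketch (not the actual benchmark script; the model and inputs are placeholders) that synchronizes before starting and stopping the clock so asynchronously queued CUDA kernels are actually counted:

```python
import time
import torch

def time_forward(model, inputs, n_iters=30, n_warmup=5):
    """Average forward-pass latency, including all queued CUDA work."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):   # warm-up runs, not timed
            model(**inputs)
        torch.cuda.synchronize()    # drain warm-up kernels before timing
        start = time.perf_counter()
        for _ in range(n_iters):
            model(**inputs)
        torch.cuda.synchronize()    # wait for all queued kernels to finish
        return (time.perf_counter() - start) / n_iters
```

Without the final synchronize, the timer would stop while kernels are still running on the GPU, understating the measured latency.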

@Cli212

Checklist

Essentials

cc @dmlc/gluon-nlp-team

github-actions[bot] commented 3 years ago

The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1461/fix_benchmark/index.html

codecov[bot] commented 3 years ago

Codecov Report

Merging #1461 (1d554bd) into master (675b7c3) will not change coverage. The diff coverage is n/a.

Impacted file tree graph

@@           Coverage Diff           @@
##           master    #1461   +/-   ##
=======================================
  Coverage   85.80%   85.80%           
=======================================
  Files          52       52           
  Lines        6855     6855           
=======================================
  Hits         5882     5882           
  Misses        973      973           

Continue to review full report at Codecov.
