Closed. xinyual closed this pull request 3 years ago.
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1500/9853cec68db6daac2c5697860349aba841d8196a/index.html
Merging #1500 (8d99c52) into master (f44ec96) will increase coverage by 0.10%. The diff coverage is n/a.
```diff
@@            Coverage Diff             @@
##           master    #1500      +/-   ##
==========================================
+ Coverage   86.35%   86.46%   +0.10%
==========================================
  Files          54       55       +1
  Lines        7388     7741     +353
==========================================
+ Hits         6380     6693     +313
- Misses       1008     1048      +40
```
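The headline percentages can be reproduced from the Hits and Lines counts in the diff above. A minimal sketch, assuming Codecov truncates (rather than rounds) percentages to two decimal places, which matches the figures in this report:

```python
def pct(hits: int, lines: int) -> float:
    """Coverage percentage, truncated to two decimal places."""
    return int(hits / lines * 10000) / 100

old = pct(6380, 7388)   # master: 86.35
new = pct(6693, 7741)   # this PR: 86.46
# Delta is truncated from the full-precision ratios, giving +0.10
delta = int((6693 / 7741 - 6380 / 7388) * 10000) / 100
print(old, new, delta)
```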
| Impacted Files | Coverage Δ | |
|---|---|---|
| src/gluonnlp/data/loading.py | 78.11% <0.00%> (-5.29%) | :arrow_down: |
| src/gluonnlp/models/t5.py | 92.74% <0.00%> (-0.92%) | :arrow_down: |
| src/gluonnlp/models/transformer.py | 98.32% <0.00%> (-0.21%) | :arrow_down: |
| src/gluonnlp/models/__init__.py | 100.00% <0.00%> (ø) | |
| src/gluonnlp/models/mt5.py | 95.49% <0.00%> (ø) | |
| src/gluonnlp/data/tokenizers/huggingface.py | 71.83% <0.00%> (+0.28%) | :arrow_up: |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f44ec96...8d99c52. Read the comment docs.
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1500/5507d1c290546394a540eb5409651564ea2fe0b8/index.html
Please resolve the conflicts.
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1500/9f4db60ae6a3cd7b0f1863d5c862db8b8a2b2bdb/index.html
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1500/e9acc98824a7af927eb6378d0cafb3786c201806/index.html
The documentation website for preview: http://gluon-nlp-staging.s3-accelerate.dualstack.amazonaws.com/PR1500/8d99c52d54eccc8878cebe2a20f5fc3ca2563f4a/index.html
Description
(Brief description on what this PR is about)
Checklist
Essentials
Changes
Comments
cc @dmlc/gluon-nlp-team