chengchingwen / Transformers.jl

Julia Implementation of Transformer models
MIT License

Adapting TextEncodeBase #88

Closed: chengchingwen closed this pull request 2 years ago

chengchingwen commented 2 years ago

Starting implementation for #83

TODOs:

EDIT: I'm going to split the GPT part into a separate PR to speed up the process.

codecov[bot] commented 2 years ago

Codecov Report

Merging #88 (e6f1ecd) into master (7dbd347) will increase coverage by 1.54%. The diff coverage is 59.89%.

```diff
@@            Coverage Diff             @@
##           master      #88      +/-   ##
==========================================
+ Coverage   20.42%   21.97%   +1.54%
==========================================
  Files          65       69       +4
  Lines        3265     3381     +116
==========================================
+ Hits          667      743      +76
- Misses       2598     2638      +40
```
| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| src/Transformers.jl | 0.00% <0.00%> (-28.58%) | ↓ |
| src/basic/embeds/etype.jl | 0.00% <0.00%> (ø) | |
| src/basic/loss.jl | 0.00% <0.00%> (ø) | |
| src/bert/tfckpt2bson.jl | 0.00% <0.00%> (ø) | |
| src/cuda/cuda.jl | 0.00% <0.00%> (ø) | |
| src/datasets/dataset.jl | 0.00% <0.00%> (ø) | |
| src/datasets/translate/iwslt2016.jl | 11.11% <0.00%> (ø) | |
| src/basic/embeds/textencoder.jl | 55.26% <55.26%> (ø) | |
| src/bert/textencoder.jl | 65.78% <65.78%> (ø) | |
| src/basic/embeds/vocab.jl | 86.66% <79.16%> (+2.18%) | ↑ |

... and 7 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 7dbd347...e6f1ecd.