ropensci/tokenizers

Fast, Consistent Tokenization of Natural Language Text
https://docs.ropensci.org/tokenizers

Add GitHub link #69

Closed: maelle closed this 6 years ago

maelle commented 6 years ago

For easier parsing by codemetar for the rOpenSci registry.
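
For context: codemetar generates a codemeta.json file from a package's DESCRIPTION, and an explicit GitHub URL there gives it an unambiguous repository link to pick up. A minimal sketch of regenerating the metadata after such a change (the actual diff is not shown in this thread, so treat the DESCRIPTION fields below as illustrative):

```r
# Illustrative only: the DESCRIPTION fields codemetar can read might be
#   URL: https://github.com/ropensci/tokenizers
#   BugReports: https://github.com/ropensci/tokenizers/issues
# write_codemeta() is codemetar's main entry point; run from the package root.
install.packages("codemetar")   # if not already installed
codemetar::write_codemeta(".")  # writes/updates codemeta.json
```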

codecov-io commented 6 years ago

Codecov Report

Merging #69 into master will decrease coverage by 1.2%. The diff coverage is n/a.


@@            Coverage Diff             @@
##           master      #69      +/-   ##
==========================================
- Coverage   99.31%   98.11%   -1.21%     
==========================================
  Files          12       12              
  Lines         441      425      -16     
==========================================
- Hits          438      417      -21     
- Misses          3        8       +5
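
As a sanity check on the numbers above: coverage is hits / lines, and the reported figures match truncating (flooring) to two decimals, which also explains why the header row shows -1.21% while the prose says 1.2%. This is an inference from the table, not documented Codecov behaviour:

```r
# Hits / lines, floored to two decimal places, reproduces the table.
hits  <- c(master = 438, pr = 417)
lines <- c(master = 441, pr = 425)
floor(hits / lines * 1e4) / 1e2
#> master     pr
#>  99.31  98.11
floor((417 / 425 - 438 / 441) * 1e4) / 1e2
#> [1] -1.21
```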
Impacted Files                        Coverage Δ
R/character-shingles-tokenizers.R     90.47% <0%> (-9.53%) ↓
R/ngram-tokenizers.R                  95.55% <0%> (-4.45%) ↓
R/tokenize_tweets.R                   97.56% <0%> (-2.44%) ↓
R/coercion.R                          91.66% <0%> (-1.2%) ↓
R/utils.R                             93.75% <0%> (-0.37%) ↓
R/ptb-tokenizer.R                     100% <0%> (ø) ↑
R/chunk-text.R                        100% <0%> (ø) ↑
R/basic-tokenizers.R                  100% <0%> (ø) ↑
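
These per-file figures originate from covr, the package that Codecov's R integration is built on. A sketch of reproducing them locally, assuming covr is installed and you are in the package root:

```r
library(covr)
cov <- package_coverage()  # runs the package's tests with instrumentation
percent_coverage(cov)      # overall percentage, comparable to the header row
zero_coverage(cov)         # data frame of source lines never executed
```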

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update d322875...0acd113.