Our tokenizer takes raw text, splits it into tokens based on their morphological aspects, and groups the tokens into sentences. It is based on the LDC tokenizer used for creating the English Treebanks, but uses more robust heuristics. Here are some key features of our tokenizer:
- Emoticons (e.g., :-), ^_^)
- Hyperlinks and email addresses (e.g., emory.edu, jinho@emory.edu, index.html)
- Numbers (e.g., 0.1, 2/3)
- Repeated punctuation (e.g., ---, ...)
- Abbreviations (e.g., Prof., Ph.D)
- File extensions (e.g., clearnlp.zip, tokenizer.doc)
- Units (e.g., 1 kg, 2 cm)
- Usernames (e.g., jinho.choi)
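To illustrate the kind of heuristics such a feature list implies, here is a minimal, self-contained sketch (not ClearNLP's actual implementation; the class name and pattern set are invented for this example). Spans matching "protected" patterns such as email addresses, emoticons, and numbers are kept as single tokens, while other chunks have trailing punctuation split off:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of heuristic tokenization: chunks that fully match a
// "protected" pattern stay intact; otherwise trailing punctuation is split off.
public class TokenSketch {
    // A tiny, illustrative pattern set; a real tokenizer uses many more heuristics.
    private static final Pattern PROTECTED = Pattern.compile(
        "\\S+@\\S+\\.[a-z]+"        // email addresses, e.g., jinho@emory.edu
      + "|[:;][-_]?[)(]|\\^_\\^"    // simple emoticons, e.g., :-), ^_^
      + "|\\d+\\.\\d+|\\d+/\\d+"    // decimals and fractions, e.g., 0.1, 2/3
    );

    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        for (String chunk : text.trim().split("\\s+")) {
            Matcher m = PROTECTED.matcher(chunk);
            if (m.matches()) {
                // the whole chunk is a protected token; keep it as-is
                tokens.add(chunk);
            } else {
                // split trailing punctuation into its own token
                String core = chunk.replaceAll("[.,!?]+$", "");
                if (!core.isEmpty()) tokens.add(core);
                if (core.length() < chunk.length())
                    tokens.add(chunk.substring(core.length()));
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Email jinho@emory.edu, score was 2/3 :-)"));
    }
}
```

Note how "jinho@emory.edu," is split into the address plus a comma token, while "2/3" and ":-)" survive as single tokens.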
TokenizerDemo shows how the tokenizer can be used through our API.