Ekphrasis is a text processing tool geared towards text from social networks, such as Twitter or Facebook. Ekphrasis performs tokenization, word normalization, word segmentation (for splitting hashtags) and spell correction, using word statistics from two large corpora (English Wikipedia, and Twitter: 330 million English tweets).
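For reference, the segmentation and spell-correction components can also be used standalone. A minimal sketch following the ekphrasis README (actual outputs depend on the downloaded word statistics):

from ekphrasis.classes.segmenter import Segmenter
from ekphrasis.classes.spellcorrect import SpellCorrector

# word segmentation using the Twitter word statistics
seg = Segmenter(corpus="twitter")
print(seg.segment("smallandinsignificant"))  # expected: "small and insignificant"

# spell correction using the English Wikipedia word statistics
sp = SpellCorrector(corpus="english")
print(sp.correct("korrect"))  # expected: "correct"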
Warning regarding using TextPreProcessor as a preprocessing for torchtext.data.Field() #7
As can be seen in the code sample below, we get different results if we pre-process a text with the TextPreProcessor text_processor and then create an Example with a torchtext.data.Field() that has no preprocessing Pipeline, versus creating an Example from the raw text with a torchtext.data.Field(preprocessing=data.Pipeline(lambda x: text_processor(x))).
With the Field preprocessing pipeline, text_processor is called at the token level instead of the sentence level, so expressions like "October 10th", which should be normalized to <date>, are not converted correctly: text_processor receives the two separate tokens "October" and "10th", and the latter is broken into "1 0 th" by that call. A comparison sketch follows the configuration and console output below.
from torchtext import data
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer
from ekphrasis.dicts.emoticons import emoticons
text_processor = TextPreProcessor(
# terms that will be normalized
normalize=['url', 'email', 'percent', 'money', 'phone', 'user',
'time', 'date', 'number'],
# terms that will be annotated
annotate={"hashtag", "allcaps", "elongated", "repeated",
'emphasis', 'censored'},
fix_html=True, # fix HTML tokens
# corpus from which the word statistics are going to be used
# for word segmentation
segmenter="twitter",
# corpus from which the word statistics are going to be used
# for spell correction
corrector="twitter",
unpack_hashtags=True, # perform word segmentation on hashtags
unpack_contractions=True, # Unpack contractions (can't -> can not)
spell_correct_elong=False, # spell correction for elongated words
# select a tokenizer. You can use SocialTokenizer, or pass your own:
# the tokenizer should take a string as input and return a list of tokens
tokenizer=SocialTokenizer(lowercase=True).tokenize,
# list of dictionaries for replacing tokens extracted from the text
# with other expressions. You can pass more than one dictionary.
dicts=[emoticons]
)
Console output while the TextPreProcessor above loads the Twitter word statistics (1-grams and 2-grams for the segmenter, then 1-grams for the corrector):
Reading twitter - 1grams ... Reading twitter - 2grams ... Reading twitter - 1grams ...
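A minimal sketch of the difference, building on the text_processor configured above. The field name "text" and the example sentence are illustrative, and the legacy torchtext API (pre-0.9 data.Field / data.Pipeline) used in this issue is assumed; the expected outputs in the comments follow the behaviour described above.

raw = "See you on October 10th"

# (a) sentence level: run text_processor on the whole string first, then
# build the Example with a Field that has no preprocessing Pipeline.
# pre_process_doc() returns a list of tokens, so join it back into a string.
sentence_field = data.Field()
processed = " ".join(text_processor.pre_process_doc(raw))
example_a = data.Example.fromlist([processed], [("text", sentence_field)])
print(example_a.text)  # ['see', 'you', 'on', '<date>'] -- the date is detected

# (b) token level: the Field tokenizes the raw string first, and the
# Pipeline then applies text_processor to every token in isolation.
token_field = data.Field(
    preprocessing=data.Pipeline(lambda x: text_processor.pre_process_doc(x)))
example_b = data.Example.fromlist([raw], [("text", token_field)])
print(example_b.text)  # "October" and "10th" are processed separately (each
                       # token now maps to a list), so no <date> annotation is
                       # produced and "10th" degrades to "1 0 th"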