cltk / lat_models_cltk

Trained taggers, tokenizers, etc. for the CLTK
MIT License

Format for sentence tokenizers? #6

Open diyclassics opened 5 years ago

diyclassics commented 5 years ago

Comparing models to NLTK, it seems like it would be better to pickle sentence tokenizer models as type nltk.tokenize.punkt.PunktSentenceTokenizer objects as opposed to their current type of nltk.tokenize.punkt.PunktTrainer objects. Cf. language-specific files here: https://github.com/nltk/nltk_data/tree/gh-pages/packages/tokenizers
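The difference between the two pickle styles can be sketched roughly as follows (a minimal sketch assuming a recent NLTK; the toy text here stands in for the full Latin corpus the real models are trained on):

```python
import pickle
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktTrainer

# Toy training text; the actual CLTK model is trained on a full Latin corpus.
text = ("Gallia est omnis divisa in partes tres. "
        "Quarum unam incolunt Belgae. "
        "Aliam Aquitani.")

# 'Trainer'-style: the PunktTrainer itself gets pickled, so downstream
# code has to reconstruct a tokenizer from it before it can tokenize.
trainer = PunktTrainer()
trainer.train(text)

# 'Tokenizer'-style: build a PunktSentenceTokenizer from the trained
# parameters and pickle that object instead.
tokenizer = PunktSentenceTokenizer(trainer.get_params())
data = pickle.dumps(tokenizer)

# Downstream code can then unpickle and tokenize directly.
loaded = pickle.loads(data)
sentences = loaded.tokenize(text)
```

This matches how the NLTK data package ships its own punkt models: the pickled object is ready to call `.tokenize()` on, with no intermediate step.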

I've added an example of such a file here: https://github.com/cltk/latin_models_cltk/blob/master/tokenizers/sentence/latin_punkt.pickle

I think the 'trainer'-style pickle files should be deprecated and phased out; new code can refer to the 'tokenizer'-style pickle files in the short term and be refactored once the former are officially removed.

Thoughts?

diyclassics commented 5 years ago

There might be an argument—again for a productive kind of parallelism in data structure with NLTK—to place this file in a directory ...tokenizers/punkt/latin.py (as opposed to .../tokenizers/sentence/latin.py).

kylepjohnson commented 5 years ago

be better to pickle sentence tokenizer models as type nltk.tokenize.punkt.PunktSentenceTokenizer objects as opposed to their current type of nltk.tokenize.punkt.PunktTrainer

Sounds fine to me. When I first wrote that, was I misunderstanding the NLTK API? Or has their API evolved since then? I could look it up, but it sounds like you have the answer at your fingertips.

a productive kind of parallelism in data structure with NLTK

I am with you in general, though the name punkt has always rubbed me the wrong way. In NLP we split two things, words and sentences -- those names are intuitive IMHO.