CopticScriptorium / coptic-nlp

Coptic NLP pipeline page and utilities
Apache License 2.0

NLP tools and epsilons for supralinear strokes #16

Open ctschroeder opened 5 years ago

ctschroeder commented 5 years ago

Some documents we are working with have an epsilon instead of a supralinear stroke. This is a common thing. Sometimes scholars argue these differences are due to dialects; I don't want to get into that debate. But we are seeing it even in the Shenoute WM Sahidic mss @bkrawiec is working on, especially en for PREP n at the beginning of a bound group. Is there anything we should be doing to address this in the NLP pipeline, other than feeding @bkrawiec's material back into the tool as training data when we are done?

amir-zeldes commented 5 years ago

This is actually quite tricky, since initial e- before n can be a variety of other things, so it often needs to be split off. I just ran a little anecdotal test and the news is... very bad:

ⲡⲗⲏⲛ_ⲅⲁⲣ_ϥⲥⲱⲧⲉⲙ_ⲉ|ⲛϭⲓⲡⲁ|ⲥⲟⲛ_ϫⲉ|ⲁ|ϥ|ⲛⲁⲩ_ⲉ|ⲛⲉ|ⲥϩⲓⲙⲉ_ⲉ|ⲛ|ⲛⲓ|ⲣⲱⲙⲉ_ⲉⲃⲟⲗ_ϫⲉ|ⲉ|ⲛ|ϥ|ϫⲱⲕ_ⲉⲃⲟⲗ_ⲁⲛ

These are just random words, so maybe realistic contexts would work better, but some of these should be easy to spot (sOtem, enqi), while others are very hard (e|n|ni is pretty reasonable, as is je|e|n|f|VERB). Currently the tokenizer is trained on normalized data and has learned to be quite conservative: it expects the normalizer to do its job, and from a couple of tests I've been running, it looks like it's particularly reluctant to assume there are verbs it doesn't know. It's more relaxed about nouns, probably because it's been surprised by them more often in the training data.

@lgessler : this is part of the challenge for this summer, and is a kind of chicken-and-egg problem: the tokenizer wants normalized data, but normalization depends on tokenization. Moving the tokenizer itself to seq2seq will not be easy, since the current architecture is very hard to beat for accuracy IF the data is already normalized. We'll need to get some numbers on normalization accuracy before we can decide whether that's a good direction to pursue.
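To make the circularity concrete, here is a minimal sketch of the problem in Python. The function names and the rewrite rule are invented for illustration and are not the actual coptic-nlp code; the point is only that any epsilon-to-stroke rewrite either runs before the tokenizer (and risks eating real ⲉ- morphs) or after it (and leaves the tokenizer looking at unnormalized input).

```python
# Minimal sketch of the chicken-and-egg problem; the names and the naive
# rule below are illustrative assumptions, not the real coptic-nlp pipeline.

def naive_normalize(group: str) -> str:
    """Pretend every word-initial ⲉⲛ spells out a supralinear stroke."""
    if group.startswith("ⲉⲛ"):
        # rewrite ⲉⲛ as ⲛ plus a combining macron, as in the corrected examples above
        return "ⲛ\u0304" + group[2:]
    return group

def tokenize(normalized_group: str) -> list[str]:
    """Stand-in for the trained tokenizer, which expects normalized input."""
    raise NotImplementedError

# The rule fires on every initial ⲉⲛ, including cases where the ⲉ is really
# a converter (CCIRC/CPRET) that should be split off rather than normalized
# away, but we only learn that from the tokenization we have not done yet.
print(naive_normalize("ⲉⲛⲉⲛϩⲉⲓⲕⲱⲛ"))  # -> ⲛ̄ⲉⲛϩⲉⲓⲕⲱⲛ, which may or may not be right
```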

Thanks for pointing this out, and @bkrawiec, if you have some recurring errors that are high frequency, we can try to feed those in as training data after you've done just a couple of pages, so that accuracy can rise for the remaining work. Fundamentally, though, this is exactly what we're meant to be thinking about in this project phase.

bkrawiec commented 5 years ago

@amir-zeldes I know this was a recurring issue in GF 253-256 for Shenoute. I think I have corrected most of them. Do you want me to leave the recurring errors in or correct them to use as training data?

For example, there was a line, "worshipping demonic idols" (Brakke/Crislip), that got automatically tokenized as: ⲉ|ⲩ|ⲟⲩⲱϣⲧ_ⲉ|ⲛⲉ|ⲛ|ϩⲉⲓⲕⲱⲛ_ⲛ|ⲇⲁⲓⲙⲟⲛⲉⲓⲟⲛ

I decided that was two ⲛ's, each with an ⲉ in place of a supralinear stroke. I could be wrong, but nothing else occurred to me. I changed the toks to this: _ⲉ|ⲩ|ⲟⲩⲱϣⲧ_ⲉⲛ|ⲉⲛ|ϩⲉⲓ̈ⲕⲱⲛ_ⲛ̄|ⲇⲁⲓ̈ⲙⲟⲛⲉⲓ̈ⲟⲛ

It's on my list of things to ask about because I am not 100% sure here. If you agree, my question is: which form do you need for training data?

amir-zeldes commented 5 years ago

Thanks for the example; that looks right in context, though it does seem odd that you have one word-initial 'en' followed by a plain 'n' for 'ndaimoneion'. From the perspective of the tool, this could have been "worshipping while icons of demons were (doing something else)", so e|ne|n could have been CCIRC|CPRET|ART - not a totally unreasonable analysis.

What's more, because our training data is limited, the tokenizer only considers tri-skip-gram context, meaning it looks one group forward and one group back when deciding on a split. All it sees is "euouOSt enenheikOn ndaimoneion", and it can't know whether a verb is about to follow (which would speak for the incorrect analysis above) or not.
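As a rough illustration of that window (this is not the actual feature extraction code), the context the tokenizer gets per bound group is just its immediate neighbours:

```python
# Rough illustration of the "one group back, one group forward" window
# described above; not the actual coptic-nlp feature extraction.

def context_windows(bound_groups: list[str]) -> list[tuple[str, str, str]]:
    """Return (previous, current, next) for each bound group, padding the edges."""
    padded = ["<s>"] + bound_groups + ["</s>"]
    return [(padded[i - 1], padded[i], padded[i + 1]) for i in range(1, len(padded) - 1)]

groups = ["ⲉⲩⲟⲩⲱϣⲧ", "ⲉⲛⲉⲛϩⲉⲓⲕⲱⲛ", "ⲛⲇⲁⲓⲙⲟⲛⲉⲓⲟⲛ"]
for prev, cur, nxt in context_windows(groups):
    print(prev, cur, nxt)
# When splitting ⲉⲛⲉⲛϩⲉⲓⲕⲱⲛ, the model only sees its immediate neighbours;
# a verb two groups downstream is invisible to it.
```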

Either way, what we need for training data is what we already do in GitDox: norm vs. orig. Currently that mapping is lookup- and rule-based, but part of our plan for the summer is to make it stochastic, just like the tokenizer itself. As long as you keep enen... in the orig layer, we have everything we need!
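For anyone following along, here is a toy sketch of what a lookup-plus-rules orig-to-norm mapping looks like. The table entry and the fallback rule are invented for illustration (and the assumed norm form may not match the project's actual conventions); a stochastic model would replace the static table with learned predictions:

```python
# Toy sketch of a lookup-plus-rules orig -> norm mapping; the entry and the
# fallback rule are invented examples, not the real GitDox/coptic-nlp resources.
import re

LOOKUP = {
    # hypothetical pair: assumes the norm layer rewrites spelled-out ⲉⲛ as
    # plain ⲛ and drops diacritics; the real conventions may differ
    "ⲉⲛⲉⲛϩⲉⲓ̈ⲕⲱⲛ": "ⲛⲛϩⲉⲓⲕⲱⲛ",
}

def normalize(orig: str) -> str:
    # 1) an exact lookup hit always wins
    if orig in LOOKUP:
        return LOOKUP[orig]
    # 2) otherwise apply broad fallback rules, e.g. strip combining diaereses
    return re.sub("\u0308", "", orig)

print(normalize("ⲉⲛⲉⲛϩⲉⲓ̈ⲕⲱⲛ"))  # lookup hit
print(normalize("ϩⲉⲓ̈ⲕⲱⲛ"))      # no entry, so the fallback rule strips the diaeresis
```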