PonteIneptique / latin-lasla-models

Repository for LASLA Latin models
Mozilla Public License 2.0

The preprocessing pipeline used for training #8

Open alexeyev opened 1 year ago

alexeyev commented 1 year ago

Dear colleague,

Thank you for your work!

May I ask: what is the right way to use the lemmatizer/PoS tagger, and which pie tokenizer or other preprocessing steps should be used for the best quality?

Here's my minimal working example. Is this exactly the same pipeline you used for the training and evaluation stages?

# coding: utf-8

from pie.tagger import Tagger
from pie.tagger import simple_tokenizer
from pie.utils import model_spec

device, batch_size, model_file = "cpu", 4, "../models/lasla-plus-lemma.tar"
data = "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. " \
       "Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."

tagger = Tagger(device=device, batch_size=batch_size)

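# register each (model, tasks) pair parsed from the model spec string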
for model, tasks in model_spec(model_file):
    tagger.add_model(model, *tasks)

sents, lengths = [], []

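# simple_tokenizer splits the raw text into sentences, each a list of tokens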
for sentence in simple_tokenizer(data):
    sents.append(sentence)
    lengths.append(len(sentence))

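# tag all sentences at once; returns the tagged sentences and the task names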
tagged, tasks = tagger.tag(sents=sents, lengths=lengths)

print("Tagged:", tagged)
print("Tasks:", tasks)

Thank you in advance.

Best regards, Anton.

PonteIneptique commented 1 year ago

Hi! Just a quick answer to show that I check the issues. I'll get back to you ASAP.

PonteIneptique commented 1 year ago

If you just wish to tag, your best bet is https://github.com/hipster-philology/nlp-pie-taggers, where I introduced all of the preprocessing AND the post-processing (specifically for enclitics like -que, -ve, and -ne).
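Roughly, usage looks like the following (a minimal sketch adapted from the pie-extended README; the import paths and keyword arguments, e.g. pie_extended.cli.utils and get_iterator_and_processor, may differ across versions, so check them against the current README):

# minimal sketch based on the pie-extended README; import paths and
# signatures may have changed, so verify against the installed version
from pie_extended.cli.utils import get_tagger, download
from pie_extended.models.lasla.imports import get_iterator_and_processor

# fetch the LASLA model once (roughly: pie-extended download lasla)
for _ in download("lasla"):
    pass

# load the downloaded model on CPU
tagger = get_tagger("lasla", batch_size=256, device="cpu", model_path=None)

data = "Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur."

# the iterator handles tokenization/normalization, the processor the
# post-processing (e.g. re-splitting enclitics such as -que, -ve, -ne)
iterator, processor = get_iterator_and_processor()
print(tagger.tag_str(data, iterator=iterator, processor=processor))

The CLI should cover the same ground end to end: pie-extended download lasla, then pie-extended tag lasla your_file.txt.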

Preprocessing (off the top of my head):

alexeyev commented 1 year ago

Hi, thank you for the swift response!

I'm afraid I'm going to abuse your kindness once again and ask a few more questions a bit later, after I take a closer look at nlp-pie-taggers. Thanks!

PonteIneptique commented 1 year ago

Sure, feel free!