urchade / GLiNER

Generalist and Lightweight Model for Named Entity Recognition (Extract any entity types from texts) @ NAACL 2024
https://arxiv.org/abs/2311.08526
Apache License 2.0

Chunking for the 384 words limit #82

Closed rjalexa closed 5 months ago

rjalexa commented 6 months ago

What is the best way to chunk longer texts so that each chunk fits under the 384-word (or 512-subtoken) limit? My articles are on average around 1200 tokens / 5000 characters. Thank you.
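For reference, one straightforward do-it-yourself approach is to split the article into word windows, run the model on each window, and shift the predicted character offsets back to document coordinates. A rough sketch, not an official recipe: the model name, labels, chunk size, and the assumption that `predict_entities` returns character-level `start`/`end` offsets in its result dicts are all illustrative choices.

```python
import re

from gliner import GLiNER

# Example model and labels only; swap in whatever you actually use.
model = GLiNER.from_pretrained("urchade/gliner_multi-v2.1")
labels = ["person", "organization", "location"]

def word_chunks(text, max_words=250):
    """Yield (start_char, chunk) pairs, each chunk at most max_words words long."""
    spans = [m.span() for m in re.finditer(r"\S+", text)]
    for i in range(0, len(spans), max_words):
        start = spans[i][0]
        end = spans[min(i + max_words, len(spans)) - 1][1]
        yield start, text[start:end]

def extract_entities(text, labels, threshold=0.5):
    """Run GLiNER chunk by chunk and remap entity offsets to the full document."""
    entities = []
    for start, chunk in word_chunks(text):
        for ent in model.predict_entities(chunk, labels, threshold=threshold):
            ent["start"] += start  # assumes char-level offsets in the output dicts
            ent["end"] += start
            entities.append(ent)
    return entities
```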

urchade commented 6 months ago

Hi, I think that gliner-spacy (https://github.com/theirstory/gliner-spacy?ref=bramadams.dev) integrates a chunking function.

Cc @wjbmattingly

wjbmattingly commented 6 months ago

Hi all. Yes, GLiNER spaCy handles the chunking for you. I kept the chunk size as an argument so that as the GLiNER model improves (and can handle larger inputs), the package won't need to be updated.
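For anyone landing here later, a minimal usage sketch of that route, assuming gliner-spacy registers a `gliner_spacy` factory on install and accepts a `chunk_size` entry in its pipe config (the model name, labels, values, and input file below are only placeholders):

```python
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("gliner_spacy", config={
    "gliner_model": "urchade/gliner_multi-v2.1",
    "labels": ["person", "organization", "location"],
    "chunk_size": 250,   # words handed to GLiNER at a time (assumed config key)
    "threshold": 0.5,
})

# Hypothetical input file; any text longer than the model limit works the same way.
long_article_text = open("article.txt", encoding="utf-8").read()
doc = nlp(long_article_text)
for ent in doc.ents:
    print(ent.text, ent.label_)
```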

rjalexa commented 6 months ago

Thank you

abedit commented 6 months ago

On that note, is it possible to use GLiNER spaCy's chunking for fine-tuning GLiNER, specifically the urchade/gliner_multi_pii-v1 model? I'm also dealing with large data.

wjbmattingly commented 6 months ago

I believe there are a few of us working on GLiNER fine-tuning packages. I have one that's not ready yet, but I believe @urchade has made progress and has a few notebooks in this repository to get you started. In all these cases, you could use gliner-spacy to help with the annotation process in something like Prodigy, from Explosion AI; it's primarily what I use for annotating textual data because it works so easily with spaCy. You would then need to modify the output to align with the GLiNER fine-tuning format. This is exactly what we did for the Placing the Holocaust project. You can see our fine-tuned GLiNER model here: https://huggingface.co/placingholocaust/gliner_small-v2.1-holocaust
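For completeness, a hedged sketch of what that output modification could look like. It assumes the fine-tuning notebooks expect records of the form `{"tokenized_text": [...], "ner": [[start_token, end_token, label], ...]}` with inclusive token spans; verify against the notebook you actually follow before training. The helper name and the toy example are mine, not from this repo.

```python
import spacy
from spacy.tokens import Span

nlp = spacy.blank("en")  # or whatever pipeline produced your annotated Docs

def doc_to_gliner_example(doc):
    """Turn a spaCy Doc with .ents into one (assumed) GLiNER training record."""
    return {
        "tokenized_text": [token.text for token in doc],
        "ner": [
            # inclusive token span; use ent.end if your notebook expects exclusive ends
            [ent.start, ent.end - 1, ent.label_.lower()]
            for ent in doc.ents
        ],
    }

# Toy example; in practice, build Docs from your Prodigy (or other) export.
doc = nlp.make_doc("Anne Frank lived in Amsterdam .")
doc.ents = [Span(doc, 0, 2, label="PERSON"), Span(doc, 4, 5, label="LOCATION")]
print(doc_to_gliner_example(doc))
# {'tokenized_text': ['Anne', 'Frank', 'lived', 'in', 'Amsterdam', '.'],
#  'ner': [[0, 1, 'person'], [4, 4, 'location']]}
```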