explosion / spacy-stanza

💥 Use the latest Stanza (StanfordNLP) research models directly in spaCy
MIT License

SPACE is not UPOS #65

Open bitPogo opened 3 years ago

bitPogo commented 3 years ago

Hey, first of all, thanks for the great work! I am currently using Stanza via spaCy for a small annotation projection project. While integrating it, I realized that spacy-stanza uses a custom Universal POS tag. I guess this goes a bit against the idea of Universal POS tags, and it makes my life harder, since I need another pass to filter those tags out. My questions are: Is there a reason why this wrapper does not filter them out? Is there any possible solution/workaround/filter to overcome this? Thanks for your time!

adrianeboyd commented 3 years ago

We're adding them in, not filtering them out. stanza itself doesn't return any annotation for whitespace, and if you feed it whitespace-only tokens I think you get nonsense back, like NOUN, because the models aren't trained to handle whitespace tokens.

Underneath, a spaCy Doc stores the text as just the sum of the individual token texts, so you need some way to represent every bit of the original text, including whitespace. Anything beyond a single trailing space is turned into a separate whitespace token. We could also potentially use X, but spaCy has been using _SP for token.tag and SPACE for token.pos since the earliest versions of the library, so it makes sense for this wrapper to behave the same way. I agree that it goes against the idea of UPOS, but it looks like it was added because there were cases where the whitespace vs. not-whitespace distinction was useful.

It should be easy to write a custom pipeline component to convert SPACE to X if you'd like, and once we've released a v3-compatible version (coming very soon!), you can use the attribute_ruler to do this.
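A minimal sketch of that conversion with spaCy v3's attribute_ruler, using a blank English pipeline so the mechanism is visible without model downloads (with spacy-stanza you would add the same attribute_ruler to the loaded pipeline instead; the manual pos_ assignment below only simulates tagger output):

```python
import spacy

# Blank pipeline: no tagger, so we simulate its output below.
nlp = spacy.blank("en")
ruler = nlp.add_pipe("attribute_ruler")
# Rewrite any token whose coarse POS is SPACE to the valid UPOS tag X.
ruler.add(patterns=[[{"POS": "SPACE"}]], attrs={"POS": "X"})

doc = nlp.make_doc("Hello \n\n world")
for token in doc:
    # Stand-in for a real tagger: whitespace tokens get SPACE.
    token.pos_ = "SPACE" if token.is_space else "NOUN"

doc = ruler(doc)
print([token.pos_ for token in doc])  # no SPACE left, X in its place
```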

bitPogo commented 3 years ago

Thanks for the quick answer and explanation! I already assumed it has more to do with spaCy than with the idea behind UPOS. However, converting the tags into X taints the complete annotation, which makes an annotation projection not really feasible, since not all tokenizers produce space tokens [1]. (That's the reason why stanza makes a mess of blank lines.) Also, blanks/spaces are not really parts of speech at all (they have no syntactic meaning).

Writing an additional component is not actually my problem (I have already done it). It's more that iterating over a corpus once more slows down the whole annotation step.
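(For anyone landing here: the core of such a component is just a filtering pass. A hypothetical sketch, where `upos_pairs` is an invented name and `text`/`pos_` follow spaCy's Token attribute naming:)

```python
from typing import Iterable, List, Tuple


def upos_pairs(tokens: Iterable) -> List[Tuple[str, str]]:
    """Return (text, UPOS) pairs, dropping spaCy's custom SPACE tag."""
    return [(t.text, t.pos_) for t in tokens if t.pos_ != "SPACE"]
```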

At the very least, it would be really nice if you could make consumers of this package aware of this behavior. It might seem obvious to people who use spaCy on a more daily basis, but not to people like me who assume it sticks to standards. This should also be acknowledged in spaCy itself, since the linked documentation for Token#pos_ is misleading and causes bugs like in my case.

[1]: Before you ask: since a projection involves at least two corpora, and I cannot assume that all components use the exact same tokenizer, I need to stick to standards like UPOS rather than concrete implementations like spaCy's.

adrianeboyd commented 3 years ago

Yes, the spaCy docs could be improved here.

If you don't want to convert SPACE to a valid UPOS tag, I'm not sure what kind of answer you're looking for? A spaCy Doc is going to include these whitespace tokens if there's whitespace beyond a single trailing space in the input, so if you don't want any space tokens in a Doc, the only option is to preprocess the texts to collapse contiguous whitespace to a single space.
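A preprocessing pass along those lines (a sketch, assuming that collapsing newlines and tabs to a single space is acceptable for your corpus):

```python
import re


def collapse_whitespace(text: str) -> str:
    """Collapse every run of whitespace to a single space, so the
    tokenizer never has to emit separate whitespace tokens."""
    return re.sub(r"\s+", " ", text).strip()


print(collapse_whitespace("Hello \n\n  world"))  # "Hello world"
```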

stanza's Document from a plain stanza pipeline might be more suitable for you than spaCy's Doc?

bitPogo commented 3 years ago

Thanks again for the quick answer.

Plain stanza is currently not an option for this iteration, since it would require a lot of changes to my project that I cannot afford to make at the moment due to time pressure.

The answer I was seeking is more like: "Hey, we already know that, and we are planning to work around it with ...". But it's also okay if the answer is: "Oh, we do that by intent.", which it looks like it is.

However, I would be very grateful if this were addressed in the docs. It cost me several hours to figure this out, and I will probably not be the last person to stumble over it.

Anyways, thanks for the help.