explosion / spacy-stanza

💥 Use the latest Stanza (StanfordNLP) research models directly in spaCy

Spacy Tokenization encoding problem #68

Closed: ZohaibRamzan closed this issue 3 years ago

ZohaibRamzan commented 3 years ago

I am using the spaCy tokenizer while creating a Stanza pipeline, but during tokenization it does not handle expressions like '1-2' properly. For example, when I tokenize the sentence 'Soak the PVDF membrane in 100% methanol for 1‐2 minutes then rinse 2‐3 times with deionized water.  ' the result is:

["Soak","the","PVDF","membrane","in","100","%","methanol","for","1\u20102","minutes","then","rinse","2\u20103","times","with","deionized","water",".","\u00a0 \n"]

What can I do to solve this issue?

polm commented 3 years ago

That's an interesting problem. For reference, it looks like your text is using the Unicode hyphen and no-break space characters:

  • Unicode Character 'HYPHEN' (U+2010)
  • Unicode Character 'NO-BREAK SPACE' (U+00A0)

spaCy (and I guess Stanza) don't have any special treatment of these characters, which means they can end up being treated differently from their ASCII equivalents. If you're working with English text and don't have to worry about losing diacritics, then maybe you can preprocess your text with unidecode.
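By the way, if you want to check what characters you're dealing with, the standard library can name each one. A quick sketch:

import unicodedata

# print the codepoint and Unicode name of every character in a snippet
for ch in "1\u20102 \u00a0":
    print(f"U+{ord(ch):04X}", unicodedata.name(ch, "UNKNOWN"))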

If you need to keep Unicode characters in general but don't want to keep these two, then I would recommend doing a simple string replace on your input text, like this:

text = text.replace("\u2010", "-")   # HYPHEN -> ASCII hyphen-minus
text = text.replace("\u00a0", " ")   # NO-BREAK SPACE -> ASCII space
ZohaibRamzan commented 3 years ago

You are right; I used 'utf-8' encoding while reading the .txt file. The problem is not limited to one or two Unicode characters. I am working on a bigger dataset, so as you can imagine there will be many Unicode characters, and I need to replace all of them. In that case, could you help me further?

polm commented 3 years ago

OK, in that case maybe unidecode can help you. Is all your text in English? Is it OK if you strip all diacritics, so that "Erdős Pál" becomes "Erdos Pal"? If so then you can just do this:

import stanza
import spacy_stanza
from unidecode import unidecode

# set up the spaCy/Stanza pipeline first, as in this repo's README
stanza.download("en")
nlp = spacy_stanza.load_pipeline("en")

text = ...  # your text goes here
doc = nlp(unidecode(text))
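Then you can spot-check the tokens from your original example:

print([t.text for t in doc])  # the Unicode hyphen and no-break space should be gone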

If that's not OK, you'll need to describe your data in more detail.

ZohaibRamzan commented 3 years ago

My complete dataset is in English. You can have a look at the dataset for more clarity: https://github.com/chaitanya2334/WLP-Dataset

polm commented 3 years ago

Thanks for the link! It's much easier to give advice when the data is open like this.

Here are some example sentences:

Add 250 µl PB2 Lysis Buffer. Centrifuge for 5 min at 11,000 x g at room temperature. HB101 or strains of the JM series), perform a wash step with 500 µl PB4 Wash Buffer pre-warmed to 50°C.

Unfortunately, it looks like the data has Unicode characters without clear ASCII equivalents. For example, unidecode would convert µl to ul, or 50°C to 50degC. That might actually be OK, since ul isn't otherwise a word, but you'd have to be careful, and it might make your output hard to understand in some cases.
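For example, a quick check, just running unidecode on the snippets above:

from unidecode import unidecode

print(unidecode("250 µl"))  # -> '250 ul'
print(unidecode("50°C"))    # -> '50degC'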

Based on the sample data I've seen, while there are a number of Unicode characters, only a few like the hyphen or the no-break space would actually cause strange behavior in spaCy's tokenizer. Given that, I would first try making a list of characters and replacing them in preprocessing (see the sketch below), and if that doesn't work, then try unidecode. If neither of those works, what I'd do next would depend on what the problem was.
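Here's a minimal sketch of that first approach; the two mappings are just the characters identified earlier in this thread, and the helper names are illustrative:

from collections import Counter

# survey which non-ASCII characters actually occur in your data
def non_ascii_counts(text):
    return Counter(ch for ch in text if ord(ch) > 127)

# replace only the characters that confuse the tokenizer;
# extend this table as you find more in your data
TOKENIZER_SAFE = str.maketrans({
    "\u2010": "-",  # HYPHEN
    "\u00a0": " ",  # NO-BREAK SPACE
})

def preprocess(text):
    return text.translate(TOKENIZER_SAFE)

doc = nlp(preprocess(text))

A single translate table is generally faster and easier to extend than chained replace calls once the list of characters grows.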

ZohaibRamzan commented 3 years ago

Thank you for showing some of the impact. This is helpful.