tsproisl / SoMaJo

A tokenizer and sentence splitter for German and English web and social media texts.
GNU General Public License v3.0

Consecutive punctuation marks are tokenized separately #19

Open liutianling opened 3 years ago

liutianling commented 3 years ago

BE kem pertama dalam Bahasa Melayu. 350 Pax Pemimpin daripada Malaysia, Singapura, Brunei Dan Indonesia!!! Marilah kita membawa gelombang #BEInternational ke Pasaran Melayu!!!🔥🔥🔥🔥🔥

(English: "BE's first camp in Malay. 350 pax of leaders from Malaysia, Singapore, Brunei and Indonesia!!! Let's bring the #BEInternational wave to the Malay market!!!🔥🔥🔥🔥🔥")

With the text above, I got the following result: [['BE', 'kem', 'pertama', 'dalam', 'Bahasa', 'Melayu', '.'], ['350', 'Pax', 'Pemimpin', 'daripada', 'Malaysia', ',', 'Singapura', ',', 'Brunei', 'Dan', 'Indonesia', '!'], ['!'], ['!'], ['Marilah', 'kita', 'membawa', 'gelombang', '#', 'BEInternational', 'ke', 'Pasaran', 'Melayu', '!'], ['!'], ['!'], ['🔥', '🔥', '🔥', '🔥', '🔥']]

I want to get the "!!!" as a single token. Thanks!

tsproisl commented 3 years ago

Wow, I didn't expect SoMaJo to be useful for Malay!

Unfortunately, I am not able to reproduce the problem. I get the following output which looks fine:

[['BE', 'kem', 'pertama', 'dalam', 'Bahasa', 'Melayu', '.'], ['350', 'Pax', 'Pemimpin', 'daripada', 'Malaysia', ',', 'Singapura', ',', 'Brunei', 'Dan', 'Indonesia', '!!!'], ['Marilah', 'kita', 'membawa', 'gelombang', '#BEInternational', 'ke', 'Pasaran', 'Melayu', '!!!', '🔥', '🔥', '🔥', '🔥', '🔥']]

How did you run the tokenizer? I tested it like this ("en_PTB" and "de_CMC" give the same results on this input – I don't know which one is more appropriate for Malay):

from somajo import SoMaJo

# "en_PTB" selects the English (Penn Treebank) tokenization rules;
# "de_CMC" would select the German CMC rules instead.
tokenizer = SoMaJo("en_PTB")
paragraphs = ["BE kem pertama dalam Bahasa Melayu. 350 Pax Pemimpin daripada Malaysia, Singapura, Brunei Dan Indonesia!!! Marilah kita membawa gelombang #BEInternational ke Pasaran Melayu!!!🔥🔥🔥🔥🔥"]
# tokenize_text tokenizes the paragraphs and splits them into sentences,
# yielding one list of Token objects per sentence.
sentences = tokenizer.tokenize_text(paragraphs)
print([[token.text for token in s] for s in sentences])
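
If you are on an older SoMaJo version that still splits the marks and cannot upgrade, you could merge them in a post-processing step. Here is a minimal sketch (the merge_punct_runs helper is my own, not part of SoMaJo's API) that joins adjacent tokens consisting of the same punctuation character:

import re

# A token made up of a single repeated sentence-final punctuation mark.
PUNCT = re.compile(r"([!?.])\1*\Z")

def merge_punct_runs(tokens):
    # Merge adjacent tokens like '!', '!', '!' into a single '!!!'.
    merged = []
    for tok in tokens:
        if merged and PUNCT.match(tok) and PUNCT.match(merged[-1]) and tok[0] == merged[-1][-1]:
            merged[-1] += tok
        else:
            merged.append(tok)
    return merged

print(merge_punct_runs(["Indonesia", "!", "!", "!"]))
# ['Indonesia', '!!!']

Note that in your output the sentence splitter also started a new sentence at each "!", so you would have to flatten the sentences into one token list before merging.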