rkcosmos / deepcut

A Thai word tokenization library using Deep Neural Network
MIT License

Custom words dictionary #73

Open venzen opened 2 years ago

venzen commented 2 years ago

Thank you for this good work. I have two questions about using this tool. First let me briefly explain my use case:

I am translating Buddhist texts from Thai to English for the Mahachulalangkornraachawitayaalay (MCU). The source material is images, so I must first do OCR (with tesseract) and then edit the output into markdown format. After that I can translate to English using Google Translate. During OCR, some characters and annotations are missed or misinterpreted. I hope that deepcut can help me correct the words that OCR misrepresents. For example, the correct word is 'ประจําบท', but OCR misses the sara am and returns 'ประจาบท'.

  1. Can deepcut help in this case?
  2. If there are new or unseen words in the text, how can I add these words to deepcut for identification in the future?
titipata commented 2 years ago

Hi @venzen, I can confirm that deepcut cannot do text correction or spelling correction. You may have to write a function (applied after OCR) to handle these corrections before performing tokenization.

For the second question, I think deepcut should generalize well enough to parse new, unseen words. However, you can pass a list of words as a custom dictionary to cover cases where you think it may tokenize wrongly.

Hope this helps a bit! Maybe someone else can follow this issue too!

titipata commented 2 years ago

There is a paper on spelling correction by Ekapol and team which may help you quite a bit: https://ieeexplore.ieee.org/document/9145483. I'm not sure whether they provide an open-source implementation somewhere.

venzen commented 2 years ago

@titipata Thank you for your response. I started implementing word-similarity checking before I saw your reply. I can use difflib.SequenceMatcher() with a text file of Thai words to find correct spellings.
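For anyone following this thread, the similarity check can be sketched with the standard library alone: `difflib.get_close_matches` wraps `SequenceMatcher` and returns the vocabulary words closest to the OCR output. The word list, helper name, and cutoff below are illustrative assumptions, not part of deepcut.

```python
import difflib

# Illustrative word list -- in practice this would be loaded from
# the text file of Thai words mentioned above.
thai_words = ['ประจําบท', 'ความนํา', 'อิริยาบถ']

def correct_spelling(word, vocabulary, cutoff=0.8):
    """Return the closest vocabulary word, or the input unchanged
    when nothing is similar enough."""
    matches = difflib.get_close_matches(word, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else word

# OCR dropped the sara am: 'ประจาบท' should be 'ประจําบท'.
print(correct_spelling('ประจาบท', thai_words))  # -> 'ประจําบท'
```

The cutoff controls how aggressive the correction is; too low and valid rare words get "corrected" into dictionary words, too high and OCR errors slip through.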

As you recommend, I then add new words to this file and pass it to deepcut as a custom dictionary. For example, deepcut was tokenizing 'ความนํา' as ['ความ', 'นํา'], and after adding the word to the custom dictionary it is correctly tokenized as ['ความนํา'].
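A hedged sketch of this workflow, assuming deepcut's `custom_dict` argument accepts a file path (or a list of words), as its README suggests. The file name is an illustrative choice, and the deepcut call is guarded so the dictionary-building part runs even where the library is not installed:

```python
# Build a custom dictionary file: one word per line, no extra whitespace.
new_words = ['ความนํา', 'ประจําบท']

with open('custom_dict.txt', 'w', encoding='utf-8') as f:
    f.write('\n'.join(new_words) + '\n')

try:
    import deepcut
    # custom_dict is assumed to take a file path or a list of words.
    tokens = deepcut.tokenize('ความนํา', custom_dict='custom_dict.txt')
    print(tokens)  # reported above to yield ['ความนํา']
except ImportError:
    tokens = None  # deepcut not installed in this environment
```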

Thank you, also, for the link to the paper on Thai spelling correction. I will read it and give feedback if I find a solution.

venzen commented 2 years ago

Unexpected behavior from deepcut: I am passing a custom dictionary that contains both the words 'หรือ' and 'อิริยาบถ'. Each word is a separate entry on its own line, without whitespace, and each entry is followed by a newline ('\n'). So we would expect deepcut to tokenize each word separately, correct?

The string is: หรืออิริยาบถน้อย มีคู้เข้า เหยียดออก....

deepcut fails to segment them and returns 'หรืออิริยาบถ' as a single list item:

['หรืออิริยาบถ', 'น้อย', ' ', 'มี', ... ]

Any idea what is happening in this case?

EDIT: I should add that my custom dictionary contains 19,000 Thai words, some of them compound words. Perhaps this is causing the strange behavior.

venzen commented 2 years ago

The issue was that the custom dictionary contained duplicate words (words also present in deepcut's built-in dictionary). When I started again from a blank custom dictionary and added only the new words, deepcut worked as expected.
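The fix above can be sketched as a small preprocessing step: strip from the custom dictionary any word the tokenizer already knows, so only genuinely new words are passed in. The base word list below is a stand-in assumption (deepcut does not document a public API for its built-in dictionary), so you would substitute whatever base vocabulary you are checking against.

```python
def dedupe_custom_dict(custom_words, base_words):
    """Keep only words not already in the base vocabulary,
    preserving order and dropping duplicates and blank lines."""
    base = set(base_words)
    seen = set()
    cleaned = []
    for word in custom_words:
        word = word.strip()
        if word and word not in base and word not in seen:
            cleaned.append(word)
            seen.add(word)
    return cleaned

# 'หรือ' stands in for a word the base dictionary already covers.
base = ['หรือ']
custom = ['หรือ', 'อิริยาบถ', 'ความนํา', 'ความนํา', '']
print(dedupe_custom_dict(custom, base))  # -> ['อิริยาบถ', 'ความนํา']
```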