meilisearch / charabia

Library used by Meilisearch to tokenize queries and documents
MIT License

Tokenization of Japanese text with disabled default features #229

Open generall opened 1 year ago

generall commented 1 year ago

Hi!

We are trying to integrate Charabia here: https://github.com/qdrant/qdrant/pull/2260. Our big concern is binary size, which is why we are trying to use it with the dictionaries for Japanese, Korean, and Chinese disabled.

Version 7.2 seemed to default to splitting the text per character in this case:

本日の日付は -> ["本", "日", "の", "日", "付", "は"]

which was fine for our purposes. The new version, however, doesn't do that anymore:

本日の日付は -> ["本日の日付は"]

I wonder whether this is an intended behavior change, and whether it is possible to configure the segmenter to behave the way it did before.
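
For reference, a rough sketch of how we check the segmentation; it assumes charabia's `Segment` trait and its `segment_str()` method (names may differ between versions), with the crate pulled in with `default-features = false`:

```rust
// Reproduction sketch (assumption: the `Segment` trait exposes
// `segment_str()` on `&str`; check the API of your charabia version).
use charabia::Segment;

fn main() {
    let text = "本日の日付は";
    let segments: Vec<&str> = text.segment_str().collect();
    // Version 7.2 printed ["本", "日", "の", "日", "付", "は"];
    // the new version prints ["本日の日付は"].
    println!("{segments:?}");
}
```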

ManyTheFish commented 1 year ago

So far it's not possible to split CJK text the way you want; however, a new segmenter could be implemented to do the job and activated behind a feature flag. If you want to open a PR, I would agree to merge it. 😃
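
To give an idea of the shape of such a contribution, here is a rough sketch of a per-character fallback segmenter. The `Segmenter` trait below only approximates the crate's internal trait; a real PR would implement the actual trait from charabia's segmenter module and wire up the feature flag:

```rust
// Sketch of a hypothetical per-character fallback segmenter. The trait
// definition here only approximates charabia's internal `Segmenter`
// trait; the real signature, bounds, and feature-flag wiring live in
// the crate's segmenter module.
pub trait Segmenter: Send + Sync {
    fn segment_str<'o>(&self, to_segment: &'o str) -> Box<dyn Iterator<Item = &'o str> + 'o>;
}

pub struct CharSegmenter;

impl Segmenter for CharSegmenter {
    fn segment_str<'o>(&self, to_segment: &'o str) -> Box<dyn Iterator<Item = &'o str> + 'o> {
        // Yield one segment per Unicode scalar value, slicing on char
        // boundaries so each CJK character becomes its own segment.
        Box::new(
            to_segment
                .char_indices()
                .map(move |(i, c)| &to_segment[i..i + c.len_utf8()]),
        )
    }
}

fn main() {
    let segments: Vec<&str> = CharSegmenter.segment_str("本日の日付は").collect();
    assert_eq!(segments, ["本", "日", "の", "日", "付", "は"]);
}
```

Activating it only when the CJK dictionary features are disabled would keep the default behavior unchanged.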

XshubhamX commented 5 months ago

Can I work on this, @ManyTheFish?

curquiza commented 5 months ago

Hello @XshubhamX

Thanks for your interest in this project 🔥 You are definitely more than welcome to open a PR for this!

For your information, we prefer not to assign people to our issues, because sometimes people ask to be assigned and never come back, which discourages other volunteer contributors from opening a PR. We will accept and merge the first PR that correctly fixes and implements the issue, following our contributing guidelines.

We are looking forward to reviewing your PR 😊