AndreaFrancis opened this issue 1 year ago
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
The P2 label should have prevented the stale bot from closing the issue. Fixed with #1635.
https://datasets-server.huggingface.co/search?dataset=HeshamHaroon%2FQA_Arabic&config=HeshamHaroon--QA_Arabic&split=train&query=%D9%85%D9%86&offset=0&limit=100 returns no results, while it should (query=من)
I tried with the Arabic and Russian stemmers as described in the DuckDB docs, but I wasn't able to perform a simple query using FTS. I posted an issue here: https://github.com/duckdb/duckdb/issues/10254
https://github.com/duckdb/duckdb/issues/10254 has been fixed, but I think we will also need to solve https://github.com/huggingface/datasets-server/issues/1914 and find a way not to break search when updating the duckdb version.
good reaction time from the duckdb team!
The https://pypi.org/project/duckdb/0.9.3.dev2934/ pre-release looks to have fixed FTS for non-ASCII characters. Is this a version we can use now, or should we wait for an official release?
let's try, I would say
https://github.com/huggingface/dataset-viewer/pull/2928 will add a language-specific stemmer for a dataset only if it is marked as monolingual (that is, a single language across all splits). But there are some caveats.
Starting with monolingual datasets sounds like the best idea, since, as you explained, handling multilingual datasets can be quite complex. The list of 26 is already a good start, and we can always fall back to the porter stemmer.
For multilingual datasets, ideally duckdb could allow using multiple stemmers somehow? Let's see with them, I guess.
To extend the list of supported languages: I like the idea of using proxy tokenizers for languages that don't have dedicated tokenizers. For those, one can use the tokenizer of the closest related language (if they share the same writing system). For example, there is such a list in the datatrove lib: https://github.com/huggingface/datatrove/blob/898efc0fc6ee2050f8ef78f7236cace2b26f2824/src/datatrove/utils/word_tokenizers.py#L297 (I'm questioning some choices there :D but the idea is nice).
Maybe we can use the same idea for stemmers?
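A proxy-stemmer lookup could be sketched like this. All of the mappings and names below are illustrative assumptions, not the viewer's actual code: the idea is just to route an unsupported language code to the stemmer of a related language with the same script, falling back to porter otherwise:

```python
# Illustrative direct mapping: language code -> DuckDB Snowball stemmer name
LANG_TO_STEMMER = {
    "en": "english",
    "ru": "russian",
    "ar": "arabic",
    "fr": "french",
}

# Hypothetical proxy mapping for languages without a dedicated stemmer,
# borrowing a closely related language with the same writing system
PROXY_STEMMERS = {
    "uk": "russian",      # Ukrainian -> Russian (Cyrillic)
    "gl": "portuguese",   # Galician -> Portuguese
    "ca": "spanish",      # Catalan -> Spanish
}

def pick_stemmer(lang_code: str) -> str:
    """Pick a stemmer for a monolingual dataset's language code."""
    if lang_code in LANG_TO_STEMMER:
        return LANG_TO_STEMMER[lang_code]
    if lang_code in PROXY_STEMMERS:
        return PROXY_STEMMERS[lang_code]
    # Safe default, matching the viewer's current behavior
    return "porter"
```

Whether a given proxy is linguistically sound would need review case by case, as Polina notes about the datatrove list.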
maybe close this issue since it's basically done now with #2928, and open new issues if we want to support multilingual datasets, have more than 26 stemmers, or make other improvements as proposed by Polina?
Currently, the 'porter' stemmer is used by default for duckdb indexing here: https://github.com/huggingface/datasets-server/pull/1296/files#diff-d9a2c828d7feca3b7f9e332e040ef861e842a16d18276b356461d2aa34396a8aR145. See https://duckdb.org/docs/extensions/full_text_search.html for more details about the 'stemmer' parameter. In the future, we could try to identify the dataset language and use an appropriate stemmer parameter when creating the FTS index.