pemistahl / lingua-py

The most accurate natural language detection library for Python, suitable for short text and mixed-language text
Apache License 2.0

Reduce memory usage? #50

Closed · osma closed this 2 years ago

osma commented 2 years ago

Naive (potential) user question here. I'm looking for a good, up-to-date language detection library for Annif - see this issue. Lingua looks promising, but it seems to require quite a lot of memory, especially when all supported languages are considered - this is pointed out in the README. I tested detecting the language of the example sentence "languages are awesome" and it required 1.8GB of memory. When I chose to preload all models, this increased to 2.6GB.
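
For reference, my test was essentially the following. The lingua calls are the library's documented API; the peak-RSS readout via `resource` is Linux-specific and only a rough approximation of the real footprint:

```python
import resource

from lingua import LanguageDetectorBuilder

# Build a detector over all supported languages; models load lazily by default
detector = LanguageDetectorBuilder.from_all_languages().build()
print(detector.detect_language_of("languages are awesome"))

# Peak resident set size; ru_maxrss is reported in kilobytes on Linux
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024, "MB")
```

The 2.6GB figure came from adding `.with_preloaded_language_models()` to the builder chain before `.build()`.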

I tested doing the same with pycld3 and langdetect, and their memory usage was much, much lower - too little to bother measuring accurately. I don't see anything in the README that would justify using such huge amounts of RAM compared to other implementations. Having the rule engine is certainly good, but I don't think the rules themselves use much RAM.
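
For comparison, the equivalent checks with those two libraries look roughly like this (both calls as documented by the respective projects):

```python
import cld3  # pycld3
from langdetect import detect  # langdetect

# pycld3 returns a prediction with language code, probability and a reliability flag
print(cld3.get_language("languages are awesome"))

# langdetect returns just the ISO 639-1 code as a string
print(detect("languages are awesome"))
```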

I'm wondering if there's some trick that other language detection libraries use to reduce their memory requirements. Could Lingua do that too? Or is this just a tradeoff that you have to accept if you want such high accuracy? For my purposes, good accuracy is nice to have but not a top priority. It would also help to be able to choose smaller and faster models with slightly reduced accuracy.
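
The one mitigation I do see in the README is restricting the detector to the languages you actually expect, so that only those models get loaded. A minimal sketch; the particular language subset here is just an example:

```python
from lingua import Language, LanguageDetectorBuilder

# Only the models for these four languages need to be loaded
languages = [Language.ENGLISH, Language.FINNISH, Language.SWEDISH, Language.GERMAN]
detector = LanguageDetectorBuilder.from_languages(*languages).build()
print(detector.detect_language_of("languages are awesome"))
```

That helps when the expected languages are known in advance, which isn't always the case here.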