**Open** · pemistahl opened this issue 2 years ago
Which files? If you need the processing in Python or JavaScript (Node), I can work on a Google Protocol Buffers format; I'm quite sure the persisted model would be much lighter, and the processing might be faster as well, though I don't know for certain. Anyway, I'm glad to help. I'm happy that you provide a JS binding as well, since I'm looking for fast language detection runnable on Node. Thanks!
I know this is only halfway there, as you were asking for a better structure to gain processing time. But for keeping a big model in memory, here is a solution:
I changed the format a little from a regular Map&lt;string, string&gt; to Map&lt;number[], string[]&gt;. I guess you treat it that way internally anyway, so hopefully that is not a problem.
Here is a working example in JavaScript/Node: https://github.com/bacloud23/lingua-rs-bigrams
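The regrouping described above might look like the following sketch. This is an assumption about the model shape (a flat object mapping each ngram to a fraction string such as `"3/1000"`), not the actual format used in the linked repo; the function name `regroupModel` is hypothetical:

```javascript
// Sketch (assumption): convert a flat { ngram: "numerator/denominator" }
// object into pairs of ([numerator, denominator], [ngrams sharing that
// fraction]). Many ngrams share the same fraction, so each fraction is
// stored once per group instead of once per ngram.
function regroupModel(flatModel) {
  const byFraction = new Map();
  for (const [ngram, fraction] of Object.entries(flatModel)) {
    if (!byFraction.has(fraction)) byFraction.set(fraction, []);
    byFraction.get(fraction).push(ngram);
  }
  // Emit [number[], string[]] pairs, matching the Map<number[], string[]>
  // shape mentioned above.
  return Array.from(byFraction, ([fraction, ngrams]) => {
    const [num, den] = fraction.split('/').map(Number);
    return [[num, den], ngrams];
  });
}

// Example with made-up frequencies:
const pairs = regroupModel({ ab: '3/1000', cd: '3/1000', ef: '7/2000' });
// pairs[0] → [[3, 1000], ['ab', 'cd']]
```

Since JavaScript `Map` keys use reference equality, an array like `[3, 1000]` cannot be used directly as a lookup key; the pairs are therefore emitted as a plain array of tuples.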
So here is how it goes: if you read the ngrams keys iteratively rather than cumulatively (I guess so), you can (I guess) load one Pair at a time inside a loop. I think that comes with a processing cost, though (again, if it is even possible). Drawback: a new protobufjs dependency.
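A Pair message for such a model could be declared roughly as follows. This schema is hypothetical (not taken from the linked repo); the point is that each `Pair` carries one fraction plus the ngrams sharing it, so a reader can decode one length-delimited message at a time instead of materialising the whole model:

```proto
// Hypothetical schema (assumption, not the repo's actual format).
syntax = "proto3";

message Pair {
  uint32 numerator = 1;   // fraction numerator shared by all ngrams below
  uint32 denominator = 2; // fraction denominator
  repeated string ngrams = 3;
}
```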
@ghost: By how much does your solution reduce the binary size?
Currently, the language models are parsed from JSON files and loaded into simple maps at runtime. Even though accessing the maps is pretty fast, they consume a significant amount of memory. The goal is to investigate whether more suitable data structures are available that require less memory, similar to what NumPy provides for Python.
One promising candidate could be ndarray.
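For the Node side of the thread, the analogous idea to ndarray/NumPy would be keeping frequencies in one contiguous typed-array buffer instead of a per-ngram hash map. This is a minimal sketch under that assumption; `buildModel` and `lookup` are hypothetical names, and the frequencies are made up:

```javascript
// Sketch (assumption): store all ngrams in one sorted array and their
// relative frequencies in a contiguous Float64Array, instead of a
// Map<string, number>. Lookup becomes a binary search over the sorted
// ngram array rather than a hash probe.
function buildModel(entries) {
  // entries: array of [ngram, frequency] pairs; sort once at load time
  entries.sort((a, b) => (a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0));
  return {
    ngrams: entries.map((e) => e[0]),
    freqs: Float64Array.from(entries, (e) => e[1]),
  };
}

function lookup(model, ngram) {
  let lo = 0;
  let hi = model.ngrams.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (model.ngrams[mid] === ngram) return model.freqs[mid];
    if (model.ngrams[mid] < ngram) lo = mid + 1;
    else hi = mid - 1;
  }
  return 0; // unseen ngram
}

const model = buildModel([['ab', 0.003], ['ef', 0.0035], ['cd', 0.003]]);
```

The trade-off is the same one NumPy makes: a dense buffer with O(log n) lookup and far less per-entry overhead, versus a hash map with O(1) lookup but a boxed entry per ngram.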