bab2min / tomotopy

Python package of Tomoto, the Topic Modeling Tool
https://bab2min.github.io/tomotopy
MIT License

Training is really fast, but inference is very slow. I read the documentation and used batching and multiple cores, but it is still very slow. Is there any other way to speed up inference? #205

Open xiaohuzi1996 opened 1 year ago

xiaohuzi1996 commented 1 year ago

I have the same problem: with 100 GB of memory in use and 40 cores enabled, inference on texts shorter than 5,000 words runs at about 2 documents/s.
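One thing that often helps, independent of the model, is passing a whole list of documents to a single `infer()` call instead of calling it per document, so that per-call overhead is amortized and the worker threads stay busy. Below is a minimal sketch of that batching pattern; `infer_batch` is a hypothetical placeholder standing in for a real call like `model.infer([model.make_doc(words) for ...], workers=...)`, and the batch size is an assumed tuning knob, not a documented default.

```python
# Sketch of batched inference: split the corpus into chunks and hand each
# chunk to one infer call. Only the chunking logic here is generic;
# infer_batch is a placeholder for the actual tomotopy call.

from typing import Iterable, List, Sequence


def chunked(items: Sequence, size: int) -> Iterable[Sequence]:
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


def infer_batch(docs: Sequence[str]) -> List[int]:
    # Placeholder for something like:
    #   docs_t = [model.make_doc(d.split()) for d in docs]
    #   topic_dists, log_likelihoods = model.infer(docs_t, workers=num_cores)
    # Here we just return per-document token counts so the sketch runs.
    return [len(d.split()) for d in docs]


def infer_all(docs: Sequence[str], batch_size: int = 1000) -> List[int]:
    """Run inference over the whole corpus, one batch per infer call."""
    results: List[int] = []
    for batch in chunked(docs, batch_size):
        results.extend(infer_batch(batch))
    return results
```

With this pattern, tuning `batch_size` trades memory for throughput; it is worth profiling whether the bottleneck is the inference iterations themselves (reducing `iter` may help) or the per-call setup.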

narayanacharya6 commented 10 months ago

I've had a similar experience. Using DMRModel I get only around 20 docs/minute on my MBP (2.6 GHz 6-core Intel Core i7, 32 GB RAM).