alumae / kaldi-gstreamer-server

Real-time full-duplex speech recognition server, based on the Kaldi toolkit and the GStreamer framework.
BSD 2-Clause "Simplified" License

workers RAM consumption #74

Closed prokopevaleksey closed 7 years ago

prokopevaleksey commented 7 years ago

Thanks a lot for this awesome project!

However, I have a minor concern about the efficiency of RAM usage for workers. As far as I understand, each worker loads the model into memory for itself every time it is started.

I am just wondering: is there an easy way to load the model into RAM once and have the workers share it?

Thank you.

alumae commented 7 years ago

No, there is no easy way to do it.

Tsaukpaetra commented 6 years ago

On that note, is it possible to use memory-mapped file handling instead of loading into RAM at all (as an option)? Or does performance degrade so much that keeping everything in RAM is mandatory?
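For context on what memory-mapped file handling would mean here, a minimal Python sketch (hypothetical, not part of the server's code; the real worker loads Kaldi models through its GStreamer decoder plugin, which does not expose such an option): mapping a read-only file with `mmap` lets the OS back it with the shared page cache, so multiple worker processes mapping the same model file would share one set of physical pages instead of each holding a private copy.

```python
import mmap
import os
import tempfile

def map_model(path):
    """Map a file read-only; pages are shared across processes by the OS."""
    fd = os.open(path, os.O_RDONLY)
    try:
        # length 0 maps the whole file; ACCESS_READ keeps it copy-free
        return mmap.mmap(fd, 0, access=mmap.ACCESS_READ)
    finally:
        os.close(fd)  # the mapping keeps its own reference to the file

# Demo with a stand-in "model" file (a real Kaldi model would be much larger).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"fake model data")
    path = f.name

mm = map_model(path)
print(mm[:9])  # slices read straight from the shared page cache
mm.close()
os.unlink(path)
```

Whether this helps in practice depends on how the decoder accesses the model: sequential or sparse reads work well with mapped files, but if decoding touches most of the model on every utterance, the pages end up resident anyway and the RAM saving largely disappears.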