Closed chriskyndrid closed 1 year ago
And where do you see the leak here? I ran this code, and memory usage remains stable at about 5 GB with the en-us-0.22 model.
On the screenshot, the second row, `const arpa`, in the leaked column, for example, is not actually leaked; it is simply model memory. That is expected.
The first row is for the RNNLM.
I accidentally posted an unsorted screenshot of the Bottom-Up view (it was sorted on Peak). The accept_waveform call was more interesting to me, but it seems to stabilize around 40 MB reported as a potential leak, regardless of how long I run it. I'm using the vosk-model-en-us-0.42-gigaspeech model.
I do see a rise on my machine to about 10.3 GB over a period of 10 minutes with the sample program, but after that usage seems to stabilize, and I don't see significant changes with Heaptrack or otherwise.
So I think I was wrong, and my initial observations don't represent an actual issue as I believed. It's likely that another library in my main program, such as GStreamer, is the culprit. Thank you for your time and feedback, and I apologize for any time wasted on the issue.
I'll go ahead and close it.
@chriskyndrid Ok, thank you for your report anyway, let us know how it goes
@nshmyrev, per your request I'm opening another issue regarding the (presumed) memory leak referenced here. The Rust bindings crate in use can be found here. Here is a sample Rust program that should reliably reproduce the issue:
1) Your main.rs:
Note the sample I used was in 44100 mono, hence the conversion.
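The original main.rs block did not survive the copy, so below is a minimal sketch of the kind of program described, assuming the `vosk` and `rayon` crates; the model path matches step 3 below, while the synthetic silent buffers stand in for the real 44100 Hz mono audio (names and exact API signatures are assumptions, not the original code):

```rust
// Hypothetical reconstruction of the reproduction program described above.
// Assumes the `vosk` and `rayon` crates; paths and buffer contents are illustrative.
use std::sync::Arc;

use rayon::prelude::*;
use vosk::{Model, Recognizer};

fn main() {
    // One Model, shared (read-only) across all worker threads.
    let model = Arc::new(Model::new("include/model").expect("failed to load model"));

    // Stand-in for real audio: eight one-second buffers of 44100 Hz mono i16 PCM.
    let jobs: Vec<Vec<i16>> = (0..8).map(|_| vec![0i16; 44_100]).collect();

    jobs.par_iter().for_each(|samples| {
        // Each Rayon worker creates its own Recognizer against the shared model.
        let mut rec =
            Recognizer::new(&model, 44_100.0).expect("failed to create recognizer");
        rec.accept_waveform(samples);
        println!("{:?}", rec.final_result());
    });
}
```

In a real run this loop would be fed decoded audio (e.g. via the hound crate for WAV files) rather than silence, and would iterate long enough to observe memory growth.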
2) In your Cargo.toml
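The original Cargo.toml block is also missing; a plausible dependencies section, with version numbers and the package name as assumptions, would look like:

```toml
# Illustrative Cargo.toml; crate versions and package name are assumptions.
[package]
name = "vosk-leak-repro"
version = "0.1.0"
edition = "2021"

[dependencies]
vosk = "0.2"
rayon = "1.7"
hound = "3.5" # WAV reading, if feeding real audio instead of synthetic buffers
```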
3) You will need directories containing the model, and libvosk.so and vosk_api.h; in my case, include/model and include/libs respectively.
4) Build the program via:
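The build command itself was not preserved; assuming the layout from step 3, it was presumably along these lines, with the linker pointed at the bundled libvosk.so:

```shell
# Tell the linker where to find libvosk.so (path from step 3; an assumption).
RUSTFLAGS="-L include/libs" cargo build --release
```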
5) Run the program via:
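The run command is likewise missing; since libvosk.so is loaded dynamically, it presumably resembled the following (binary name taken from the hypothetical Cargo.toml above):

```shell
# libvosk.so must be on the loader path at runtime; paths are assumptions.
LD_LIBRARY_PATH=include/libs ./target/release/vosk-leak-repro
```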
This sample program will run recognition in parallel using the Rayon crate, so the model will be shared across many threads. Each thread will spin up and create its own Recognizer.