Thanks for reporting this!
It would be great if you could reproduce this without the torch dependency (see the untested sketch below).
There have been some memory-related issues in the past; maybe there is some useful information there: #58, #140 (only Windows?), #158
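Untested, but a torch-free version could look something like this, with the inference replaced by an arbitrary NumPy computation (the sample rate and block size here are just placeholders):

```python
import numpy as np
import sounddevice as sd

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    # Arbitrary per-block computation standing in for the model inference:
    outdata[:] = np.tanh(indata)

# Placeholder settings; use whatever matches your real setup.
with sd.Stream(samplerate=48000, blocksize=1024, channels=1, callback=callback):
    input("Press Enter to stop...\n")
```

If the memory still grows with this, the problem is independent of torch; if not, the interaction between the two is the interesting part.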
Hi @mgeier,
Thanks for the prompt reply. Unfortunately, I am not sure how to reproduce this without torch as a dependency. This is why I initially thought the issue was the model itself, until I tested it separately.
I saw the issues you mentioned but couldn't find anything there to solve this. The only additional detail I can think of is that the sound output of the model sounds as expected, without any dropouts, even as the memory usage increases. Please let me know if there are any additional tests I could try to narrow this down further.
I don't really have experience with debugging memory problems, but quite some time ago I read about https://pympler.readthedocs.io/. I've never used it, but it sounds like it could help?
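From a quick look at its docs (untested, since I haven't used it), something like this might show which object types accumulate over time:

```python
from pympler import tracker

tr = tracker.SummaryTracker()
# ... run a number of inferences / stream callbacks here ...
tr.print_diff()  # prints object types whose count or total size grew since the tracker was created
```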
Here are a few more random links about memory profiling, I haven't tried any of this:
Hi @mgeier,
Thanks a lot! I haven't had time yet to check what you sent, but I'll keep you posted once I've been able to run some tests.
Hi @mgeier,
I was spending some time setting up pympler, and when I was about to try it I realised that the leak is gone. I am not sure what fixed it, but I am now getting deprecation warnings from pytorch that I don't remember seeing before. I know I have updated some modules in the last few days, so I guess a dependency update actually fixed it, but unfortunately I have no scientific proof to track down the origin of the original leak.
Here are the libraries that have been changed compared to the first post:
- Operating system: macOS Monterey 12.6 (same)
- python version: 3.8.13 (same)
- sounddevice version: 0.4.5 (same)
- torch version: 1.12.1 (updated)
- numpy version: 1.22.0 (downgraded)
I think this issue can be considered solved for now, but I'll keep you posted if by any chance I find something new about it.
Thanks for the update!
I'll close this for now, but we can re-open it whenever new information comes to light.
For future reference, another memory profiler: https://github.com/pythonspeed/filprofiler
Hi,
I am trying to run a neural network with the `Stream` context manager to do real-time inference. When I do so, the process memory rapidly increases over time. At first I thought the network was the culprit. However, I wrote a script that does the exact same thing but takes `sounddevice` out of the equation, and in that scenario the memory does not increase even after thousands of inferences.
Here are the model inferences outside `sounddevice`:
And here is where I included it, which produces the memory leak. Whenever I run the process, I am monitoring it with the `top` command:
If it helps, here is some further information about where I am testing it:
Here you can obtain the model (it is an untrained copy, so it will only produce a distorted version of the input, which is fine): model.pt.zip
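For completeness, the overall structure is roughly the following (a simplified sketch rather than the exact script; the model path, sample rate, and block size are placeholders, and a TorchScript model is assumed):

```python
import sounddevice as sd
import torch

# Placeholder path: the untrained model attached above as model.pt.zip
model = torch.jit.load("model.pt")
model.eval()

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)
    x = torch.from_numpy(indata.T)  # (channels, frames) tensor from the input block
    with torch.no_grad():           # inference only, no autograd graph
        y = model(x)
    outdata[:] = y.numpy().T        # write the processed block back

with sd.Stream(samplerate=48000, blocksize=1024, channels=1, callback=callback):
    input("Press Enter to stop...\n")
```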
Thanks!