Open pmichel31415 opened 7 years ago
I've got a similar problem running my parser several times in a system using DyNet. If you urgently need to get your system running, you can run each step in its own process with "os.system(command)", as I do.
for a_eval in data: os.system(command)  # the command itself must load and save the model, e.g. if has_prev_model: load the model
In my case, the saved model is almost 1.5G, so I need to delete the model from memory before running the next job (which loads another model), but that turned out to be impossible. I spent a lot of time on a temporary workaround because I assumed the memory problem was a fault in my own system.
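The per-process workaround above can be sketched as follows. This is a hypothetical illustration, not code from the thread: the script name, flag names, and file paths are placeholders for your own training/evaluation command. The point is that each step runs in a fresh child process, so the OS reclaims all of DyNet's allocator pool (including the ~1.5G model) when the child exits.

```python
# Sketch of the workaround: run each evaluation step in a child process
# so memory is reclaimed at process exit. Script path and flag names
# ("--model", "--data") are hypothetical placeholders.
import subprocess
import sys

def run_eval_in_fresh_process(script, model_path, data_file):
    """Run one step in a child Python process; the child must itself
    load the saved model at startup and save it before exiting."""
    cmd = [sys.executable, script, "--model", model_path, "--data", data_file]
    return subprocess.run(cmd).returncode

# Usage sketch: one child process per evaluation, memory freed each time.
# for data_file in eval_files:
#     rc = run_eval_in_fresh_process("eval.py", "model.bin", data_file)
#     assert rc == 0
```

This trades process-startup and model-reload time for guaranteed memory cleanup, which is why it is only a stopgap until DyNet can free parameters in-process.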
Thank you for sharing it :+1:
Did you verify it is indeed a Python issue and does not happen in C++?
By looking at the C++ code, I think this is because there are no destructors for ParameterStorage and LookupParametersStorage.
Yes, I confirmed this is a C++ problem.
So I took a look at this, and it's basically a result of the DyNet memory allocator not having any concept of garbage collection. When a parameter is allocated, the allocator's usage counter is simply incremented, and there's no way to free memory (other than freeing all of it, which would destroy all of the parameters we have loaded).
I thought about this a bit, and I think it might be useful to generalize the DyNet memory management so that we can support different types of allocators:
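To illustrate why deleting a single parameter cannot reclaim its memory, here is a minimal sketch (in Python, for clarity; DyNet's actual allocator is C++) of a bump-style allocator like the one described above: allocation only advances an offset, and the sole "free" operation resets the entire pool. The class and method names are illustrative, not DyNet's.

```python
# Minimal sketch of a bump ("linear") allocator: allocating just
# advances an offset into a fixed pool, so there is no per-allocation
# free -- the only option is to reset everything at once.
class BumpAllocator:
    def __init__(self, capacity):
        self.capacity = capacity  # total pool size in bytes
        self.used = 0             # current high-water mark

    def allocate(self, nbytes):
        """Return an offset into the pool (stand-in for a pointer)."""
        if self.used + nbytes > self.capacity:
            raise MemoryError("pool exhausted")
        offset = self.used
        self.used += nbytes
        return offset

    def free_all(self):
        """The only way to reclaim memory: drop every allocation."""
        self.used = 0
```

A garbage-collecting or free-list allocator, as the proposal to generalize the memory management suggests, would instead track individual allocations so a deleted parameter's block could be reused.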
Is this done by calling cleanup(), declared in init.h?
I have a process that needs to repeatedly create and delete independent dynet models. In this case, would I call initialize() and cleanup() repeatedly?
When running this code:
With an old version (without the dynamic memory pool), I get the following error:
There are two big problems here:
Is there a way to limit the memory (ideally an option like --dynet-mem [initial mem]-[maximum-mem])?