Open · liu946 opened this issue 5 years ago
Thanks for reporting this! It will be hard for us to debug this ourselves, but maybe you could run a memory profiler such as valgrind to identify memory leaks? If you can find the place that seems to be causing the leak, we can try to fix it.
I have at least one hint: when compiled with MKL only, memory usage is constant; but when additionally compiled with CUDA and cuDNN, there is severe memory leakage.
Do you use cuDNN?
We used DyNet to build a semantic role labeling model consisting of multiple LSTMs. During long-term online deployment, as the model serves predictions, memory usage keeps increasing, so there may be a memory leak somewhere in DyNet.
The code calling DyNet is at https://github.com/HIT-SCIR/ltp/tree/master/src/srl. (The exact location of the leak is unknown; we have checked our code and all `new`s are properly paired with deletes.)
Our DyNet version is a copy taken from https://github.com/HIT-SCIR/ltp/tree/master/thirdparty/dynet two years ago. We have tried dynamic and static memory-checking tools but found nothing.
Has anyone else experienced the same problem? Is there a fix for a bug like this, and would updating our DyNet copy to a newer version resolve it?
Looking forward to your reply. Thank you.