Closed. wongwill86 closed this issue 8 years ago.
Pushed a temporary fix using a deep copy to ensure the pointer is managed by numpy. This solution was suggested by TallJimbo: https://github.com/ndarray/Boost.NumPy/issues/54
The commit: https://github.com/seung-lab/znn-release/commit/698a2f2e8b68d1794eb72eb75c8e889d844c20c0
This fix incurs two copy overheads. They could be avoided by manually allocating a buffer on the heap, not deleting it in C++, and passing the pointer to numpy along with ownership; any such change would need to be checked for memory leaks.
For now, we are using a deep copy to handle this issue.
Description
Output data is not guaranteed to be correct, because the underlying buffer may already be deallocated by the time it is accessed in Python.
Example code
https://github.com/seung-lab/znn-release/blob/master/python/pyznn_utils.hpp#L421-L435
```cpp
qube_p<T> tqp = get_qube<T>( vec4i(sc,sz,sy,sx) );
```
creates a smart pointer (`std::shared_ptr`) to newly allocated memory on the heap. This memory is the destination for copying from the input cube list `clist`. The numpy array is then built around `tqp->data()`, but when the function returns, `tqp` is destroyed, deallocating the memory that we copied the data into, so the returned array points at freed memory.

Talked to @jingpengwu and @torms3, as well as reading the jingpeng + Boost.NumPy discussion.
Why haven't we seen any issues?
@torms3 suggests that it is because we have been "lucky": we use a custom memory pool manager that does not destroy the data immediately after deallocation, so the stale pointer usually still reads the old contents. This is likely to become a bigger problem in the future when we add more outputs.
How to fix?