Closed icoderaven closed 4 years ago
Hi icoderaven,
I just implemented a version of this in commits https://github.com/NVIDIA/gvdb-voxels/commit/dc4d3a0f26bd098850c20ac98d1aca52d8618c4a and https://github.com/NVIDIA/gvdb-voxels/commit/f0d05bb67c902ff74d300e7d4e6eccc1579e59a2. These add destructors to most of the objects in GVDB so that everything should get destroyed correctly. In particular, the following loop no longer leaks memory:

```cpp
while (1) {
    nvdb::VolumeGVDB gvdb;
    gvdb.SetCudaDevice(GVDB_DEV_CURRENT);
    gvdb.Initialize();
}
```

I think `cuda-memcheck --leak-check full` should now run cleanly, with the exception of one error due to CUDA/OpenGL interop. Let me know if this breaks anything; since these commits destroy objects more thoroughly and slightly change the API semantics for setting cameras and lights, I wouldn't be surprised if they do!
Closing this pull request, since I think it has been implemented in the two commits listed above and more than a month has passed. Please let me know if this is still an issue! (This should be the last thing closed for today.)
Makes sense! Sorry, I just defended my thesis a couple of days ago and as a result didn't want to touch the GVDB backend. I'll merge the latest commits (and get back to the discussion on the other commits) and test sometime soon!
No worries - congratulations!
GVDB doesn't clean up allocated GPU memory after itself. This becomes an issue when using multiple GVDB contexts. These are all the allocations I had to track down to eliminate memory leaks in my application when run via `cuda-memcheck --leak-check full`.