diolatzis opened this issue 4 years ago
Hi Stavros -- does the leak still happen when you add an enoki.cuda_malloc_trim() call to the loop?
Hi Wenzel, adding this makes the memory usage fluctuate much more, but on average it is still increasing (albeit more slowly).
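Concretely, the trim call sits at the top of each render iteration; a minimal sketch of the placement (render_once is a stand-in for the actual per-iteration scene build and render, and num_renders is an illustrative count):

    import enoki as ek

    for i in range(num_renders):   # num_renders: illustrative iteration count
        ek.cuda_malloc_trim()      # ask enoki to release its cached CUDA allocations
        render_once()              # stand-in for scene loading + rendering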
Same here... I use gpu_rgb too. The following is my rendering loop:
    import enoki as ek
    import mitsuba
    mitsuba.set_variant('gpu_rgb')
    from mitsuba.core.xml import load_string

    for asset in asset_list:
        for envmap in envmap_list:
            for envrot in range(0, 360, 10):
                ek.cuda_malloc_trim()  # release enoki's cached CUDA allocations
                scene = load_string(build_scene(...))
                scene.integrator().render(scene, scene.sensors()[0])
                film = scene.sensors()[0].film()
                film.set_destination_file(...)
                film.develop()
It eventually fails with cuda_check(): runtime API error = 0002 "cudaErrorMemoryAllocation" in ../ext/enoki/src/cuda/horiz.cu:59.
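If it helps to quantify the leak, here is a minimal sketch for logging per-iteration GPU memory via pynvml (an assumption on my part: the pynvml bindings are installed; gpu_mem_used_mb is an illustrative helper name):

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # GPU 0; adjust as needed

    def gpu_mem_used_mb():
        # Query the driver for this device's currently allocated memory.
        return pynvml.nvmlDeviceGetMemoryInfo(handle).used / (1024 ** 2)

Printing gpu_mem_used_mb() at the end of each loop iteration should make the steady climb visible well before the cudaErrorMemoryAllocation above is hit.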
Hey, thanks for the great work on Mitsuba 2 so far!
Summary
I am seeing a GPU memory leak when rendering with the "gpu_rgb" variant of Mitsuba 2.
System configuration
Windows 10
Visual Studio 2019
Python 3.7.9
gpu_rgb variant
Description
When rendering a scene (the cbox from the documentation) through Python in a loop for dataset generation, the GPU memory in use slowly increases. Am I missing something, or is there a GPU memory leak?
Steps to reproduce
To test this, I changed the render_scene.py example script to render in a loop (a rough sketch follows), and the GPU memory usage slowly increases.
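A minimal sketch of that loop, assuming the cbox scene from the documentation (the paths are illustrative, and this is adapted from render_scene.py rather than copied verbatim):

    import mitsuba
    mitsuba.set_variant('gpu_rgb')

    from mitsuba.core import Thread
    from mitsuba.core.xml import load_file

    # Make the scene's resources findable (directory name is illustrative)
    Thread.thread().file_resolver().append('cbox')

    for i in range(100):
        scene = load_file('cbox/cbox.xml')  # illustrative path to the cbox scene
        scene.integrator().render(scene, scene.sensors()[0])
        # Watch GPU memory (e.g. via nvidia-smi) creep upward across iterations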