Open kif opened 1 year ago
Can something like valgrind maybe provide details on where those allocations are taking place?
Here are the valgrind "massif" profiles for two runs of the program on a limited number of images (2000), with and without profiling activated. Valgrind still points at h5py rather than pyopencl, but enabling the profiling option makes a 16 GB difference in memory consumption.
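For reference, a massif profile like these can be collected with something along these lines (the script name is a placeholder for the actual application):

```shell
# Run the application under valgrind's massif heap profiler.
# --pages-as-heap=yes also tracks mmap'd pages, which matters for
# allocations made outside of plain malloc (e.g. by OpenCL drivers).
valgrind --tool=massif --pages-as-heap=yes python my_app.py

# Render the resulting massif.out.<pid> file as a text report.
ms_print massif.out.12345
```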
Without profiling:
With profiling:
I ran it several more times, and it looks like OpenCL profiling prevents the memory from being freed.
So I tried to collect only timestamps for each event instead of keeping the complete event objects. The patch is for now implemented in: https://github.com/silx-kit/silx/pull/3690
The memory profile now looks like this. One would have expected 10 memory frees (since 10 files are processed), but fewer are visible.
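The idea behind the patch can be sketched as follows. `FakeEvent` and `record` are hypothetical stand-ins for illustration, not silx's actual implementation: the point is that only a small named tuple per kernel is retained, so the event object itself (and whatever host memory it pins) can be garbage-collected immediately.

```python
import collections

class FakeEvent:
    """Hypothetical stand-in for a pyopencl profiling event."""
    def __init__(self, name, start, end):
        self.name = name
        self.start = start  # queued/start timestamp in ns
        self.end = end      # completion timestamp in ns

# One small record per kernel instead of the whole event object.
ProfileEntry = collections.namedtuple("ProfileEntry", "name duration_ns")

def record(log, event):
    """Extract the timestamps, then drop the reference to the event."""
    log.append(ProfileEntry(event.name, event.end - event.start))

log = []
for i in range(3):
    evt = FakeEvent("corner_detect", start=1000 * i, end=1000 * i + 250)
    record(log, evt)
    del evt  # nothing else holds the event, so it can be freed now

print(log[0].duration_ns)  # prints 250
```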
I ran into something similar in another project ... but profiling was not involved this time.
https://github.com/kif/multianalyzer/blob/main/multianalyzer/opencl.py
The pattern was similar: read data from an HDF5 file with large chunks and send them to the GPU ...
But once again, I was unable to reproduce the behaviour within a self-contained script.
Calling the pyopencl.array method finalize helps in freeing the memory on the CPU.
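The mechanism at play can be sketched with stub classes (hypothetical stand-ins, not pyopencl's actual implementation): a pyopencl-style array keeps a list of pending events, and waiting on them and then clearing that list releases the last references to host-side staging memory.

```python
class StubEvent:
    """Stand-in for an OpenCL event that pins host-side staging memory."""
    def __init__(self):
        self.host_buffer = bytearray(4 * 1024 * 1024)

    def wait(self):
        pass  # a real event would block until the kernel has completed

class StubArray:
    """Stand-in for an array that records the events that produced it."""
    def __init__(self):
        self.events = []  # pending events referencing host buffers

    def finalize(self):
        """Wait on all pending events, then drop the references."""
        for evt in self.events:
            evt.wait()
        del self.events[:]

arr = StubArray()
arr.events.append(StubEvent())
arr.finalize()  # after this, the staging buffers can be garbage-collected
print(len(arr.events))  # prints 0
```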
Describe the bug
Very large (host) memory consumption has been observed when running an OpenCL application in profiling mode. Example: processing 10000 4-Mpix images (int32) with ~6 kernels per image on an Nvidia Tesla A40 gets (OOM-)killed on a computer with 200 GB of memory. The computer could host all images, uncompressed, in memory.
I used the tracemalloc tool from Python on the application without finding a noticeable leak (at the Python level), indicating that the leak comes from malloc calls performed outside the scope of Python. I investigated a possible leak coming from HDF5 via h5py, since all data were read and written in this format, but this was not the case. When profiling is disabled, the memory consumption does not exceed a few percent of the total memory.
To Reproduce
Investigated in: https://github.com/silx-kit/pyFAI/pull/1744
Expected behavior
A memory leak is expected from keeping the list of all events, but it should not exceed 3.4 MB for 60000 kernels (when stored as 2-element namedtuples).
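The 3.4 MB figure is consistent with a back-of-the-envelope estimate: on 64-bit CPython, a 2-element (named)tuple instance costs on the order of 56 bytes (object header plus two pointer slots; the exact value varies by Python version, and the objects it points to are not counted here):

```python
# Rough per-entry size of a 2-element namedtuple on 64-bit CPython
# (assumption: ~56 bytes; exact value depends on the Python version).
BYTES_PER_ENTRY = 56
N_KERNELS = 60_000

total = BYTES_PER_ENTRY * N_KERNELS
print(f"{total / 1e6:.2f} MB")  # prints 3.36 MB
```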
Environment (please complete the following information):
Additional context
The list of events is handled at https://github.com/silx-kit/silx/blob/master/src/silx/opencl/processing.py#L288