maweigert / gputools

GPU accelerated image/volume processing in Python
BSD 3-Clause "New" or "Revised" License
108 stars 20 forks

How to release GPU memory after computing? #26

Open sunshichen opened 3 years ago

sunshichen commented 3 years ago

I'm using gputools to do ndarray smoothing. After processing finishes, the GPU still holds hundreds of MB of memory. How can I release that memory properly?

Sample code:

from gputools.convolve import median_filter
import numpy as np

array = np.random.randint(0, 2, (128, 128, 128))
smoothed_array = median_filter(array, size=5)
parthnatekar commented 2 years ago

Hi, I'm having the same issue. Can you please comment on how to release memory so that I can process batches serially?
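One pattern that generally works with pyopencl-backed code is to keep only host-side copies of the results and drop every reference to GPU-backed objects between batches. A minimal sketch of that loop; `filter_batch` here is a hypothetical stand-in for the gputools call so the example runs without an OpenCL device:

```python
import gc
import numpy as np

def filter_batch(batch):
    # Stand-in for e.g. gputools.convolve.median_filter(batch, size=5);
    # replace with the real call on a machine with an OpenCL device.
    return batch.astype(np.float32)

def process_serially(batches):
    results = []
    for batch in batches:
        out = filter_batch(batch)
        results.append(np.asarray(out))  # keep only a host-side copy
        del out                          # drop the reference to any GPU buffer
        gc.collect()                     # collect stragglers (e.g. reference cycles)
    return results

batches = [np.random.randint(0, 2, (32, 32, 32)) for _ in range(3)]
results = process_serially(batches)
print(len(results))  # 3
```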

maweigert commented 2 years ago

Hi,

Sorry for not having replied to this issue sooner.

from gputools.convolve import median_filter
import numpy as np

array = np.random.randint(0, 2, (128, 128, 128))
smoothed_array = median_filter(array, size=5)

I can't reproduce this. After median_filter runs, all extra memory on the GPU is properly released and overall GPU memory consumption is as before.

> Hi, I'm having the same issue. Can you please comment on how to release memory so that I can process batches serially?

pyopencl will normally release the GPU memory associated with an intermediate array as soon as that array goes out of scope (i.e. when its last reference disappears).

E.g. when allocating an array:

from gputools import OCLArray
x = OCLArray.zeros((1024,)*3)

you should see GPU memory usage go up; after deleting the array

del x

it should drop back to the previous level.
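The scope-based release above can be illustrated in plain Python, no GPU needed: CPython frees an object (and runs any cleanup attached to it) the moment its last reference disappears, which is the mechanism pyopencl's buffers rely on. `FakeBuffer` is a hypothetical stand-in for a GPU buffer:

```python
import weakref

class FakeBuffer:
    """Hypothetical stand-in for a pyopencl-backed GPU buffer."""
    pass

released = []
x = FakeBuffer()
# Attach a finalizer, analogous to pyopencl releasing the cl_mem handle
# when the Python-side object is destroyed.
weakref.finalize(x, released.append, "buffer released")

print(released)  # [] -- still referenced, memory held
del x            # last reference gone -> finalizer runs immediately
print(released)  # ['buffer released']
```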

Hope that helps,

Martin

parthnatekar commented 2 years ago

Hi,

Thanks for your response.

I'm doing alternating deskew and deconvolution operations; the deskew uses gputools and the deconvolution uses a TensorFlow backend.

It seems that the issue arises when gputools runs after a TensorFlow iteration, even when I manually clear the TensorFlow graph.

Any thoughts?
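One possible culprit (an assumption, not confirmed in this thread): by default TensorFlow's allocator reserves nearly all free GPU memory at startup, which can starve the OpenCL context that gputools creates afterwards. Enabling memory growth makes TensorFlow allocate incrementally instead; a sketch for TF 2.x:

```python
import tensorflow as tf

# Ask TensorFlow to grow its GPU allocation on demand instead of
# grabbing (almost) all free GPU memory up front. Must be called
# before any GPU has been initialized.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```

Whether this resolves the interaction depends on whether the OpenCL and CUDA allocations are actually competing for the same device memory.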