JuliaGPU / CUDA.jl

CUDA programming in Julia.
https://juliagpu.org/cuda/

CUFFT plans seem to leak GPU memory #2400

Open · david-macmahon opened 1 month ago

david-macmahon commented 1 month ago

The memory associated with CUFFT plans is not always reclaimed. This was a big problem with CUDA.jl v5.3.4 because plan memory was not consistently reclaimed/reused. On master (as of a2a9b13) the situation is much improved, but there still appears to be a leak of one plan's worth of memory.
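
The behavior can also be driven in a compact loop; this is a minimal sketch using only the calls from the transcript below (exact figures will vary by device):

julia> using CUDA, CUDA.CUFFT

julia> x = CUDA.zeros(2^28);   # 2^28 Float32s = 1024 MiB on the GPU

julia> for i in 1:3
           p = plan_rfft(x)         # each plan allocates ~1 GiB of work area
           p = nothing              # drop the only reference to the plan
           GC.gc(); CUDA.reclaim()  # finalize the plan, return cached memory to the driver
           CUDA.memory_status()     # settles ~1 GiB above the data-only baseline after the first iteration
       end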

Here is the behavior I'm seeing when using master (a2a9b13):

julia> using CUDA, CUDA.CUFFT

julia> CUDA.memory_status()
Effective GPU memory usage: 1.02% (163.438 MiB/15.724 GiB)
Memory pool usage: 0 bytes (0 bytes reserved)

julia> x=CUDA.zeros(2^28); CUDA.memory_status()
Effective GPU memory usage: 7.37% (1.160 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=plan_rfft(x); CUDA.memory_status()
Effective GPU memory usage: 13.81% (2.171 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=nothing; GC.gc(); CUDA.memory_status(); CUDA.reclaim(); CUDA.memory_status()
Effective GPU memory usage: 13.81% (2.171 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)
Effective GPU memory usage: 13.81% (2.171 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=plan_rfft(x); CUDA.memory_status()
Effective GPU memory usage: 20.18% (3.173 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=nothing; GC.gc(); CUDA.memory_status(); CUDA.reclaim(); CUDA.memory_status()
Effective GPU memory usage: 20.18% (3.173 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)
Effective GPU memory usage: 13.81% (2.171 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=plan_rfft(x); CUDA.memory_status()
Effective GPU memory usage: 20.18% (3.173 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

julia> p=nothing; GC.gc(); CUDA.memory_status(); CUDA.reclaim(); CUDA.memory_status()
Effective GPU memory usage: 20.18% (3.173 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)
Effective GPU memory usage: 13.81% (2.171 GiB/15.724 GiB)
Memory pool usage: 1024.000 MiB (1024.000 MiB reserved)

The data array uses 1 GiB of GPU memory. The first plan also uses 1 GiB, but that memory is not reclaimed after the plan is (presumably) GC'd. The second plan does not reuse the first plan's memory, so GPU memory usage goes up to 3 GiB, but this is reclaimed when the second plan is GC'd. The third plan behaves the same as the second.
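
As a quick sanity check on the array size (it matches the 1024.000 MiB pool usage reported above):

julia> Base.format_bytes(sizeof(Float32) * 2^28)
"1024.000 MiB"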

Curiously, p.handle alternates between 1 and 2 with each plan creation. I'm not sure if that's relevant, but I suspect the desired behavior would be to keep reusing handle 1.
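
For completeness, the alternation can be observed with a loop like this (a sketch; handle values may differ on other setups):

julia> for i in 1:4
           p = plan_rfft(x)
           @show p.handle           # alternates 1, 2, 1, 2 here
           p = nothing; GC.gc(); CUDA.reclaim()
       end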