GVigne opened 1 year ago
CC @haampie who might have some ideas on the second part. It's a common operation in DFT codes, so QE has probably run into this issue. @GVigne, did you check that it dispatched to a special-purpose kernel (i.e., no generic fallback)?
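One quick way to check for generic fallbacks (a sketch, not from the thread): disallow scalar indexing, so any generic method that falls back to element-by-element CPU indexing throws an error instead of running silently.

```julia
using CUDA

# Any generic fallback that indexes a GPU array element-by-element
# will now throw an error instead of silently running slowly.
CUDA.allowscalar(false)

X = CUDA.rand(Float32, 10, 10)
sum(X)  # fine: dispatches to CUDA.jl's reduction kernel
```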
The comment suggests the boundscheck is expensive; can you try putting `@inbounds` in front?
I tried that, but it didn't change anything.
cc: @tkf @maleadt
One thing I would do first is use NVIDIA Nsight Systems to profile your application: https://cuda.juliagpu.org/stable/development/profiling/#NVIDIA-Nsight-Systems
> When I used the `@time` macro on the line doing the tolerance check, I noticed something rather odd:
You likely need `CUDA.@sync`, see https://cuda.juliagpu.org/stable/development/profiling/#Time-measurements
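To illustrate the point about `CUDA.@sync` (a minimal sketch; the array size mirrors the one mentioned later in the thread):

```julia
using CUDA

X = CUDA.rand(Float32, 84751, 194)

# Without synchronisation, @time mostly measures the asynchronous
# kernel launch, not the actual computation.
@time sum(abs2, X)

# CUDA.@sync blocks until the device has finished, so the timing
# reflects the real cost of the operation.
@time CUDA.@sync sum(abs2, X)
```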
I tried that, but it didn't change anything.
Did you put the `@inbounds` around the `@view`? The bounds checks it does are ridiculously expensive (https://github.com/JuliaGPU/CUDA.jl/issues/1678) and should probably be changed.
I tried that, but it didn't change anything.
> Did you put the `@inbounds` around the `@view`? The bounds checks it does are ridiculously expensive (JuliaGPU/CUDA.jl#1678) and should probably be changed.
Yeah, sorry, I wasn't being very explicit. I tried to put the `@inbounds` around the `view` by doing:

```julia
f_fourier .= @inbounds view(f_real, kpt.mapping)
```
This doesn't seem to change the timings much though, as most of the time gets spent doing this allocation. I also tried to manually comment out the code in CUDA.jl to make sure the bounds really were checked, but this didn't change anything either. I must admit I don't really know what's going on...
```julia
f_fourier .= @inbounds view(f_real, kpt.mapping)
```

No actual work is being done by the `view`, so it's possible this doesn't do anything. Did you try `@inbounds f_fourier .= ...`? (Or `@inbounds begin ... end` if you want to be sure.)
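A self-contained sketch of the two placements suggested above (the array names are illustrative stand-ins for the ones in DFTK):

```julia
using CUDA

f_real    = CUDA.rand(Float32, 1000)
mapping   = CuArray(1:500)        # stand-in for kpt.mapping
f_fourier = CUDA.zeros(Float32, 500)

# Variant 1: @inbounds in front of the whole assignment.
@inbounds f_fourier .= view(f_real, mapping)

# Variant 2: @inbounds around a begin/end block, to be sure it
# covers the view construction as well.
@inbounds begin
    f_fourier .= view(f_real, mapping)
end
```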
Yes, I used both, but that didn't change the timings. I'm currently looking into the source code of TimerOutputs, because I'm starting to doubt the times it's measuring...
Ah yes, sorry, the bounds check is indeed done within the `view`:

```julia
julia> x = [1; 2]; view(x, 3:4); nothing;
ERROR: BoundsError: attempt to access 2-element Vector{Int64} at index [3:4]
```
I've noticed some performance issues when doing GPU computations with DFTK. I'm opening this issue to gather ideas, because so far I haven't managed to solve them (perhaps @vchuravy can help?).
I launched the SCF solver for a model with a supercell of parameters (4,5,2), using `Float32`. On my computer this takes about 22 seconds, and the timer for the LOBPCG routine shows a lot of time being spent in `drop!`.

The current implementation of `drop!` is actually quite bad for GPUs: it launches a kernel for each column of `X`, so it's no surprise that a lot of time is spent in this loop. `drop!` is called very often (more than 130 times in this case); however, actually having to drop a column by randomising it is rather uncommon. `X` is a very "skinny" matrix: in this case, its size is around (84751, 194).

My idea was to vectorize the norm computation. However, doing this also forces us to bring the norms array back to the CPU, and we barely win any computational time. Instead, I would like to also vectorize the tolerance check (using `any` or `mapreduce`), and only do the GPU -> CPU transfer when it is actually needed. Surprisingly, this doesn't reduce computational time: the norms get computed much faster, as expected, but the tolerance check still takes a lot of time. When I used the `@time` macro on the line doing the tolerance check, I noticed something rather odd: since `X_norms` is a relatively small vector (around 200 elements), I expected it to always run in around 25 µs, not in 12 ms. Initially, I thought the anonymous function would get recompiled every time, but that doesn't explain why some calls are much faster than others, and moving the anonymous function out and giving it a name doesn't change anything. Why is there such a difference between calls? Is this a synchronisation issue?

The other performance issue I noticed is in the `fft!` function. Most of the time is spent doing the following:

```julia
f_fourier .= view(f_real, kpt.mapping)
```

Each call is actually rather fast (about 800 µs), but since we call the fft thousands of times, it adds up quickly: in this example, doing ffts for the local and kinetic terms takes about 4 seconds (16% of computational time), and these 4 seconds are almost exclusively spent doing the operation involving `view` above. I think this is expected, as this view more or less comes down to scalar indexing (we only want to pick a few elements, based on their position in an array), so there is probably not much we can do on the GPU side. However, if someone has an idea to get rid of the view, it would be great.
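One possibility, sketched under the assumption that `kpt.mapping` does not change between calls: build the view once outside the hot loop, so that if the bounds check at view construction (JuliaGPU/CUDA.jl#1678) is the expensive part, it is paid only once rather than on every fft. Names and sizes below are illustrative stand-ins, not the actual DFTK code.

```julia
using CUDA

f_real    = CUDA.rand(Float32, 100_000)
mapping   = CuArray(1:84_751)          # stand-in for kpt.mapping
f_fourier = CUDA.zeros(Float32, length(mapping))

# Hoist the view out of the hot loop: the bounds check happens once
# at construction, not on every call.
f_real_view = view(f_real, mapping)

for _ in 1:3                           # stands in for the many fft! calls
    f_fourier .= f_real_view           # a single gather kernel per call
end
```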