JuliaDiff / ChainRules.jl

forward and reverse mode automatic differentiation primitives for Julia Base + StdLibs

Chainrule for CUDA reduction #666

Open · renatobellotti opened this issue 2 years ago

renatobellotti commented 2 years ago

Hi,

I'd like to suggest including a rule for GPU reductions.

using CUDA, Zygote

function my_loss(v)
    # This works:
    # l = sum(v)
    # This does not work:
    l = reduce(+, v)
    return l
end

v = cu([1., 2.])
Zygote.gradient(my_loss, v)

See also: https://github.com/FluxML/Zygote.jl/issues/730#issuecomment-1221146525
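
For reference, the sum path already differentiates on the GPU. Since the gradient of sum is an array of ones, a quick check (assuming a working CUDA setup) looks like:

using CUDA, Zygote

v = cu([1.0, 2.0])
# sum has a rule, so the whole computation stays on the GPU;
# expected result: a two-element array of ones
Zygote.gradient(sum, v)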

mcabbott commented 2 years ago

rrule(reduce, +, x; kw...) can just call rrule(sum, x; kw...), right?
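
A minimal sketch of that delegation (not the actual ChainRules.jl rule; it ignores keywords like dims and init and assumes the two-argument rrule(sum, x) method):

using ChainRules, ChainRulesCore

function ChainRulesCore.rrule(::typeof(reduce), ::typeof(+), x::AbstractArray)
    # forward pass: reuse the existing rule for sum, so a CuArray
    # takes the same GPU reduction path it would without AD
    y, sum_pullback = rrule(sum, x)
    function reduce_pullback(ȳ)
        _, x̄ = sum_pullback(ȳ)
        # one tangent each for: reduce itself, the + operator, and the array
        return (NoTangent(), NoTangent(), x̄)
    end
    return y, reduce_pullback
end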

renatobellotti commented 2 years ago

Isn't the reduction implemented on the GPU? I don't know the details, but reducing on the GPU and then copying only the scalar result back is certainly more efficient than copying the entire vector to the CPU and reducing there.

mcabbott commented 2 years ago

Sure. For the forward pass, the rrule for sum just calls sum again on its input, and thus uses the same GPU code as without AD. (The reverse pass is written using broadcasting, which also works on the GPU.)
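
Schematically, the rule has this shape (a simplified sketch; the shipped rule in ChainRules.jl also handles dims and wraps the pullback in a thunk):

using ChainRulesCore

function sum_rrule_sketch(x::AbstractArray)
    y = sum(x)  # forward pass: the ordinary sum, so CUDA's reduction kernel runs
    function sum_pullback(ȳ)
        x̄ = ȳ .* one.(x)  # reverse pass: a broadcast, which is also GPU-friendly
        return (NoTangent(), x̄)
    end
    return y, sum_pullback
end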

renatobellotti commented 2 years ago

Nice!