gdalle opened this issue 1 month ago
sure, PR welcome!
Sure! I'll try to handle this case correctly in DI first, because it still errors at the moment. Once I have a handle on the single-argument solution, I'll try to tamper with the generated function to do the same for multiple arguments.
bump @gdalle
gentle ping @gdalle
Essentially this comes down to adding the option for `Active` inputs here:
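For context, this is roughly what an `Active` input gives you at the `autodiff` level today (a small sketch with a made-up scalar function): the derivative is returned as part of the result instead of being accumulated into a shadow argument.

```julia
using Enzyme

g(x) = x^2                   # made-up example function

res = Enzyme.autodiff(Reverse, g, Active, Active(3.0))
dx = res[1][1]               # 6.0, returned rather than stored in a shadow
```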
The variable interpolated as `$arg` is a boolean defined as follows, which is a bit obscure to me. Care to shed some light?
Hi! As you know, @ExpandingMan and I are looking to optimize performance for StaticArrays. Forward mode works splendidly, but reverse mode still makes one allocation during the `gradient` call. I found it surprising because Enzyme guesses the right activity for `SVector` (see the sketch below).
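Roughly, a reproduction looks like the following (the objective `f` is only a placeholder, not the original benchmark, and the `guess_activity` call is just one way to inspect the activity guess via an internal helper):

```julia
using Enzyme, StaticArrays, BenchmarkTools

f(x) = sum(abs2, x)          # placeholder objective, not the original function
x = SVector(1.0, 2.0, 3.0)

# Inspect Enzyme's activity guess for an SVector in reverse mode (internal helper)
Enzyme.Compiler.guess_activity(typeof(x), Enzyme.API.DEM_ReverseModeGradient)

@btime Enzyme.gradient(Forward, $f, $x)   # allocation-free
@btime Enzyme.gradient(Reverse, $f, $x)   # reports one allocation
```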
The allocation happens on the following line: https://github.com/EnzymeAD/Enzyme.jl/blob/42ecd12cf5076f8d3db1694e014f69bc0b99173f/src/Enzyme.jl#L1708

From what I understand, the generated function `Enzyme.gradient` puts a `Ref` there to treat every argument as `(Mixed)Duplicated`. This means that all gradient results are stored in the passed arguments: https://github.com/EnzymeAD/Enzyme.jl/blob/42ecd12cf5076f8d3db1694e014f69bc0b99173f/src/Enzyme.jl#L1741

Otherwise, you would have to recover some gradients from the result and others from the arguments, which is understandably tricky.
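To make the `Ref` pattern concrete, here is a rough sketch of the idea rather than the actual generated code (it uses plain `Duplicated` on a `Ref` instead of `MixedDuplicated`): the immutable argument gets boxed so that a mutable shadow exists to receive the gradient, and that box is the extra allocation.

```julia
using Enzyme, StaticArrays

f(x) = sum(abs2, x)          # placeholder objective
x = SVector(1.0, 2.0, 3.0)

xref  = Ref(x)               # boxing the immutable input: the extra allocation
dxref = Ref(zero(x))         # mutable shadow that receives the gradient
Enzyme.autodiff(Reverse, r -> f(r[]), Active, Duplicated(xref, dxref))
dxref[]                      # gradient, read back out of the shadow
```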
Do you think there is an easy fix in Enzyme? Otherwise, since DI only has one differentiated argument, I assume it will be rather straightforward to call `Enzyme.autodiff` directly inside `DI.gradient` and recover allocation-free behavior.
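Concretely, the single-argument path could look something like this (a minimal sketch assuming the input is an immutable, isbits `SVector` so that `Active` applies; whether it is truly allocation-free in practice would still need benchmarking):

```julia
using Enzyme, StaticArrays

f(x) = sum(abs2, x)          # placeholder objective
x = SVector(1.0, 2.0, 3.0)

# Active input: the gradient comes back in the result instead of a Ref shadow
res = Enzyme.autodiff(Reverse, f, Active, Active(x))
grad = res[1][1]             # SVector gradient (here 2 .* x)
```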
Related: