avik-pal opened 2 months ago
Commenting so I keep track of this. If you think something deserves to be in DI, let me know! cc @adrhill
@gdalle what do you think about the last part in https://discourse.julialang.org/t/ann-lux-jl-explicitly-parameterized-neural-networks-in-julia/81689/65?u=avikpal?
If that exists in DI, I can just unwrap StatefulLuxLayer into that DI struct and forward the call
Seems like overloading the calls for custom functions won't really work because of ambiguity issues.
Can't we just specialize on `Lux.StatefulLuxLayer` and pass the arguments to `Lux.vector_jacobian_product` for `DifferentiationInterface.pullback`?
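A rough sketch of what that specialization could look like, assuming a simplified `pullback(f, backend, x, dy)` signature (the real DI operators also take a preparation/extras argument, which is exactly the ambiguity problem mentioned below; the method body here is illustrative, not the actual API):

```julia
# Hypothetical sketch: dispatch DI's pullback on StatefulLuxLayer and forward
# to Lux's own VJP machinery, which knows how to rewrite the call for nested AD.
import DifferentiationInterface as DI
using DifferentiationInterface: AbstractADType
using Lux

function DI.pullback(model::Lux.StatefulLuxLayer, backend::AbstractADType, x, dy)
    # Assumed forwarding target; the real signature may differ.
    return Lux.vector_jacobian_product(model, backend, x, dy)
end
```

The catch, as noted in the next comment, is that DI's operators also dispatch on the extras/preparation type, so a single method like this is not enough.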
I was trying that for the gradient calls, but DI specializes on the extras type, which means we would also have to specialize on each extras type for all backends.
To support second order for Enzyme, I introduced `DifferentiationInterface.nested(::AbstractADType)` in https://github.com/gdalle/DifferentiationInterface.jl/pull/285. The idea is that it returns a possibly different version of the backend object, which is aware that it is being differentiated. At the moment it doesn't do anything, except for `AutoEnzyme`, which is turned into a homemade `AutoDeferredEnzyme`.
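The mechanism can be sketched in a couple of lines (simplified from that PR; `AutoDeferredEnzyme` is an internal DI type, and the exact definitions may differ):

```julia
using ADTypes: AbstractADType, AutoEnzyme

# Default: a backend is unchanged when differentiated through.
nested(backend::AbstractADType) = backend

# Enzyme must use deferred autodiff when it is itself being differentiated,
# so its backend object is swapped for an internal deferred variant.
nested(::AutoEnzyme) = AutoDeferredEnzyme()  # internal DI type, shown for illustration
```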
Would this be useful functionality for Lux.jl and friends? Should I make it public / work on it some more?
One could imagine an extension where `nested` tells the inner backend which outer backend is trying to differentiate through it.
If I understand correctly, Lux handles nested AD implicitly by replacing the calls (#598) and explicitly with `vector_jacobian_product` and `jacobian_vector_product`.
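For context, the explicit entry points look roughly like this (signatures assumed to be `(f, backend, x, seed)`; check the Lux docs for the exact forms):

```julia
using Lux, Zygote, ForwardDiff
using ADTypes: AutoZygote, AutoForwardDiff

f(x) = sum(abs2, x)  # toy function; in practice f would involve a Lux model
x = rand(3)
v = rand(3)

# Explicit VJP/JVP operators mentioned above (assumed signatures):
vjp = Lux.vector_jacobian_product(f, AutoZygote(), x, v)
jvp = Lux.jacobian_vector_product(f, AutoForwardDiff(), x, v)
```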
@gdalle Can `DifferentiationInterface.nested` resolve the need for them? (assuming that everyone only uses DI, not the APIs of each package)
I'm not sure, because there are several things one might want to do with nested backends, and depending on the situation this Lux replacement trick may not always be appropriate.
Just putting it out there in case Avik is inspired. Essentially, modifying the backend is the cleanest approach I could think of for this type of problem.
To clarify how nested AD works in Lux: it doesn't simply switch the backends, i.e. we don't take a `Zygote.gradient(Zygote.gradient(...))` call and make it `ForwardDiff.gradient(Zygote.gradient(...))`. You could in principle do that, but you shouldn't (doing so would be computationally terrible). Instead, it changes the operations to a JVP over a gradient. Now, just extend that to Jacobians, JVPs, VJPs, etc.
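The "JVP over a gradient" pattern is the classic forward-over-reverse Hessian-vector product. A minimal sketch (toy function and helper names are mine, not Lux's internals):

```julia
using ForwardDiff, Zygote

# Hessian-vector product as a directional (forward-mode) derivative of a
# reverse-mode gradient: forward over reverse, instead of reverse over reverse.
hvp(f, x, v) = ForwardDiff.derivative(t -> Zygote.gradient(f, x .+ t .* v)[1], 0.0)

f(x) = sum(abs2, x) / 2   # gradient is x, so the Hessian is the identity
x = rand(4)
v = rand(4)
hvp(f, x, v) ≈ v          # should hold since H = I for this f
```

This is the shape of transformation Lux applies when it detects a JVP/gradient nesting, rather than stacking two reverse passes.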
The only case where replacement is not ideal is `ForwardDiff.gradient(ForwardDiff.gradient(...))` where the problem size is extremely small, but we don't replace that anyway.
All the other forms of Zygote over ForwardDiff or Zygote over Zygote (or any reverse mode over X-mode) have no computational benefit and will error in most cases, so it does make sense to switch.
Even doing `Enzyme.Reverse` over `Enzyme.Reverse` would be a bad idea just because of the overhead of reverse mode[^1]. Basically, for second order (not general nested higher-order AD), it is almost certainly beneficial to switch the operations.
[^1]: Okay, it might be faster if the reverse mode is calling into one of the vendor-specific codes and the forward mode isn't, but that is mostly because we got lazy.
Oh right, my `nested` trick works because I needed to change the behavior of the inner backend, but here you change the behavior of the outer backend when a gradient is already happening inside. I honestly don't know if there is a nice way to integrate this into DI, especially because we don't handle multiple parameters atm.
We capture these calls after https://github.com/LuxDL/Lux.jl/pull/598. We should also capture the DI `jacobian`, `gradient`, and, most importantly, `pullback` calls to augment them with the faster versions. An important question here is whether we should switch all calls or only calls with `SecondOrder`. I prefer the former, where we can just use ForwardDiff to do the AD. Maybe for `SecondOrder` we respect the user's choice.
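For reference, this is roughly how the user-facing `SecondOrder` choice looks in DI, with an outer forward-mode backend differentiating through an inner reverse-mode one (matching the forward-over-reverse switch discussed above; exact re-exports may vary between versions):

```julia
using DifferentiationInterface
using ADTypes: AutoForwardDiff, AutoZygote
using ForwardDiff, Zygote

f(x) = sum(abs2, x) / 2
x = rand(3)

# SecondOrder(outer, inner): ForwardDiff pushes tangents through a Zygote gradient.
backend = SecondOrder(AutoForwardDiff(), AutoZygote())
H = hessian(f, backend, x)   # should be ≈ the identity matrix for this f
```

Respecting this user choice only for `SecondOrder`, while silently switching plain nested calls, would be one way to implement the "former" option above.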