Open itk22 opened 8 months ago
Hey there! Thanks for the issue.
Would you be able to condense your code down to a single MWE? (Preferably around 20 lines of code.) For example we probably don't need the details of your training loop, the fact that it's batched, etc. Moreover I'm afraid this code won't run -- if nothing else, it currently doesn't have any import statements.
Hi @patrick-kidger,
I updated the original post with a condensed MWE.
In the above code, when I use optx.RecursiveCheckpointAdjoint(), I am able to recover the correct gradients. However, when I use optx.ImplicitAdjoint with a solver specified as CG, the gradients are all exactly zero. To be fair, I was not expecting this to work out of the box because 1. adapt_fn does not find the exact solution to the inner optimization problem, and 2. even for a small network, this seems to be a rather difficult calculation. However, the jax-opt example I shared above indicates that the gradients can be calculated correctly in a similar scenario using CG.
Because of this, I started wondering whether there is a fundamental difference in how implicit adjoints are calculated in the two packages. My instinct is that the mismatch might have to do with the handling of higher-order terms, but I am curious to hear your opinion and whether it is something that can be quickly patched.
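For concreteness, here is a minimal sketch of the kind of comparison being described. This is not the original MWE: the inner loss, solver choice, and all hyperparameters are made up for illustration, and the Optimistix/Lineax calls reflect my understanding of their current APIs.

```python
import jax
import jax.numpy as jnp
import lineax as lx
import optimistix as optx


def inner_loss(y, args):
    # Toy inner objective; its argmin (y = 1) does not depend on the start point.
    return jnp.sum((y - 1.0) ** 2)


def adapted_loss(y0, adjoint):
    # Run a short inner minimisation from y0 and return a "meta" objective.
    solver = optx.GradientDescent(learning_rate=0.1, rtol=1e-8, atol=1e-8)
    sol = optx.minimise(
        inner_loss, solver, y0, adjoint=adjoint, max_steps=5, throw=False
    )
    return jnp.sum(sol.value**2)


y0 = jnp.zeros(3)
unrolled = jax.grad(adapted_loss)(y0, optx.RecursiveCheckpointAdjoint())
implicit = jax.grad(adapted_loss)(
    y0,
    optx.ImplicitAdjoint(
        # lx.CG requires the operator (here the Hessian at the solution) to be
        # tagged positive semidefinite.
        linear_solver=lx.CG(rtol=1e-6, atol=1e-6, max_steps=20),
        tags=frozenset({lx.positive_semidefinite_tag}),
    ),
)
print(unrolled)  # nonzero: differentiates through the 5 unrolled steps
print(implicit)  # zeros: the converged argmin does not depend on y0
```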
So in a minimisation problem, the solution stays constant as you perturb the initial parameters. Regardless of where you start, you should expect to converge to the same solution! So in fact a zero gradient is what is expected. (Imagine finding argmin_x x^2. It doesn't matter whether you start at x=1 or x=1.1; either way your output will be x=0.)
The fact that you get a nonzero gradient via RecursiveCheckpointAdjoint will be because you are taking so few steps that you are not actually converging to the minimum at all. (In the above example, you might only converge as far as x=0.5 or x=0.6.) So I think for a meta-learning use case, RecursiveCheckpointAdjoint is probably actually the correct thing to be doing!
The fact that JAXopt appears to do otherwise is possibly a bug in JAXopt. (?)
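To put a number on this intuition, here is a tiny self-contained illustration (plain JAX, made-up step size): differentiating a few unrolled gradient-descent steps on x^2 with respect to the starting point gives a clearly nonzero derivative, while a long, fully converged run gives a derivative that is numerically zero.

```python
import jax


def run_gd(x0, num_steps, lr=0.1):
    # Gradient descent on f(x) = x^2, i.e. x <- x - lr * 2x = (1 - 2*lr) * x.
    x = x0
    for _ in range(num_steps):
        x = x - lr * 2.0 * x
    return x


print(jax.grad(run_gd)(1.0, 3))    # 0.512: few steps, still sensitive to x0
print(jax.grad(run_gd)(1.0, 200))  # ~0.0: converged, x0 no longer matters
```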
That aside, some comments on your implementation:
- Avoid the CG solver if you can help it. This is a fairly numerically inaccurate / unstable solver.
- Be careful with sol.aux. In its current meaning this is the aux from the final step of the solve. This means gradients through this are actually not defined mathematically, since this is an internal detail of the solver and unrelated to the implicit function theorem! So (a) don't try to use this in gradient calculations when using ImplicitAdjoint, but also (b) I can still totally see that this is a footgun without guardrails, and you've prompted us to think about whether there's a way to adjust this into something better.
Hi Patrick,
Thank you for your thorough response. It is true that the solution to a minimisation problem is independent of the initial parameters, which should lead to zero gradients. As you noted, the gradients from RecursiveCheckpointAdjoint are non-zero in the above MWE because we take very few optimisation steps. I set that number low to emulate a typical bi-level meta-learning setup where, in the inner loop, we do not fully optimise the model parameters for each task but rather take just a few steps of optimisation. This is because the goal is not to find the true optimum for any single task but rather to optimise the initialisation such that the model can quickly adapt to related tasks. In this case, the gradients through the inner loop must be non-zero for the initialisation to evolve across outer-loop iterations. Also, iMAML adds a special regularising term to ensure the meta-gradients remain non-zero even for larger numbers of inner steps.
So, there is no bug in JAXopt; it only appeared so because of my incomplete explanation. The iMAML paper that I am following uses CG because it avoids forming the Hessian matrix explicitly. CG-like iterative solvers also seem to be used quite extensively within JAXopt - as far as I understand, this again has to do with their matrix-free nature. Optimistix, on the other hand, seems to use direct solvers by default.
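For what it's worth, a rough sketch of the iMAML-style setup described above, under the same API assumptions as the earlier sketch and with made-up names and hyperparameters (task_loss, lam): adding a proximal term 0.5 * lam * ||y - meta||^2 to the inner loss makes the inner argmin genuinely depend on the meta-parameters, so the implicit-function-theorem gradient computed by ImplicitAdjoint is no longer zero even though the initialisation itself contributes nothing.

```python
import jax
import jax.numpy as jnp
import lineax as lx
import optimistix as optx

lam = 1.0  # strength of the proximal regulariser


def task_loss(y):
    # Stand-in for the per-task inner objective.
    return jnp.sum((y - 2.0) ** 2)


def inner_loss(y, meta):
    # iMAML-style inner problem: task loss plus a proximal pull towards `meta`.
    return task_loss(y) + 0.5 * lam * jnp.sum((y - meta) ** 2)


def outer_loss(meta):
    solver = optx.GradientDescent(learning_rate=0.1, rtol=1e-8, atol=1e-8)
    adjoint = optx.ImplicitAdjoint(
        linear_solver=lx.CG(rtol=1e-6, atol=1e-6, max_steps=20),
        tags=frozenset({lx.positive_semidefinite_tag}),
    )
    sol = optx.minimise(
        inner_loss, solver, meta, args=meta, adjoint=adjoint,
        max_steps=100, throw=False,
    )
    return jnp.sum(sol.value**2)


# Nonzero, because the argmin of `inner_loss` now depends on `meta`.
print(jax.grad(outer_loss)(jnp.zeros(3)))
```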
Hi @patrick-kidger and @packquickly,
I was trying to implement the following meta-learning example from jax-opt in Optimistix: Few-shot Adaptation with Model Agnostic Meta-Learning. However, I ran into an issue with implicit differentiation through the inner loop. The example below runs well when using optx.RecursiveCheckpointAdjoint, but when I try to recreate the iMAML setup by using optx.ImplicitAdjoint with a CG solver with 20 steps, all the meta-gradients are zero and the meta-optimiser doesn't change at all during training. Could you please help me identify the issue with the code? It seems to be an implementation detail for implicit adjoints that differs between jax-opt and Optimistix.
Here is an MWE: