Closed: gdalle closed this issue 5 months ago
Here is how I solve it in my tailored ForwardDiff-compatible calls:
```julia
import ForwardDiff as ℱ   # the ℱ alias is inferred from the calls below
import Krylov, LinearOperators

# stack the dual-number partials of the input into an m × k matrix
partials = mapreduce(ℱ.partials, hcat, bcx)'

# Jacobians of the conditions w.r.t. the inputs (b) and the solution (a)
b = ℱ.jacobian(x -> solve_sylvester_equation_condition(x, val), bcx)
a = ℱ.jacobian(x -> solve_sylvester_equation_condition(bcx, x), val)

# lazy square operator that applies a to each column of the reshaped
# unknown, so a single gmres call solves for all partials at once
reshape_matmul = LinearOperators.LinearOperator(Float64,
    size(b, 1) * size(partials, 2), size(b, 1) * size(partials, 2),
    false, false,
    (sol, 𝐱) -> begin
        𝐗 = reshape(𝐱, (size(b, 1), size(partials, 2)))
        sol .= vec(a * 𝐗)
        return sol
    end)

X, info = Krylov.gmres(reshape_matmul, -vec(b * partials)) # atol = tol

jvp = reshape(X, (size(b, 1), size(partials, 2)))
```
Here you can also see nicely why it's worth having different `conditions_backends`. If you go from m input elements to n output elements and m >> n, you want to use reverse mode overall, but the conditions go both ways in terms of dimensionality: once from m to n, and once from n to m (with the solution as input). In that sense, the case with the potentially largest efficiency gain is the one where you differentiate the condition with respect to the solution (`b`). And if m >> n, or vice versa, it can make sense to use autodiff the other way around compared to what you use on the implicit function. I hope this was clear :)
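To make that dimensionality argument concrete, here is a minimal sketch of picking the AD mode per condition Jacobian. The helper `condition_jacobians` and the choice of Zygote as the reverse-mode backend are my own illustration, not part of ImplicitDifferentiation's API:

```julia
# Minimal sketch, assuming conditions c(x, y) with x ∈ ℝᵐ (inputs)
# and y ∈ ℝⁿ (solution); `condition_jacobians` is a hypothetical helper.
using ForwardDiff, Zygote

function condition_jacobians(c, x, y)
    m, n = length(x), length(y)
    # ∂c/∂x maps m inputs to n outputs: reverse mode pays off when m >> n
    Jx = m > n ? Zygote.jacobian(x_ -> c(x_, y), x)[1] :
                 ForwardDiff.jacobian(x_ -> c(x_, y), x)
    # ∂c/∂y goes the other way (n inputs), so the cheaper mode can differ
    Jy = n > m ? Zygote.jacobian(y_ -> c(x, y_), y)[1] :
                 ForwardDiff.jacobian(y_ -> c(x, y_), y)
    return Jx, Jy
end
```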
I'm struggling to make this work because `gmres(A, B)` doesn't perform a parallel solve when `B` is a matrix. On the other hand, `A \ B` works fine.
See the `vec` and `reshape` operations in my example.
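For completeness, a self-contained sketch of that trick, with made-up sizes `n`, `k` and a random test problem (not from the original thread): it solves `A * X = B` for a matrix `B` with a solver that only accepts vector right-hand sides.

```julia
using Krylov, LinearAlgebra, LinearOperators

n, k = 100, 5
A = randn(n, n) + 2n * I   # well-conditioned test matrix
B = randn(n, k)

# square operator of size (n*k) × (n*k): apply A block-wise via reshape
op = LinearOperator(Float64, n * k, n * k, false, false,
    (res, v) -> (res .= vec(A * reshape(v, n, k)); res))

x, stats = Krylov.gmres(op, vec(B))
X = reshape(x, n, k)
X ≈ A \ B   # should hold up to the solver tolerance
```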
I also gave it a try but was struggling with the operators. Working with materialized matrices is somewhat easier for me...
I tried a bit myself and managed the direct solver case, but the linear operator dimensions are a bit tricky to figure out.
This is an internal performance issue that can always be improved in a non-breaking way, so I'm putting it on hold to focus on the design for v0.5.
Fixed by #135 for the dense Jacobian case; see #129 for the lazy Jacobian case.
See https://github.com/gdalle/ImplicitDifferentiation.jl/issues/71#issuecomment-1662287726 by @thorek1