david-hofmann opened this issue 1 year ago
After a certain point it becomes more interesting to compute the sparse Jacobian J once and for all instead of iterating matrix-vector products with a lazy operator. The turning point depends on the number of columns of the matrix V.
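The trade-off can be made concrete by counting function evaluations: applying the lazy operator costs one JVP per column of V, while materializing the Jacobian by directional derivatives costs one per input dimension. A rough sketch in Python/numpy (a finite-difference JVP stands in for the dual-number JVPs the Julia packages use; every name here is illustrative, not any package's API):

```python
import numpy as np

evals = 0

def f(x):
    """Toy nonlinear map R^n -> R^n; counts how often it is called."""
    global evals
    evals += 1
    return np.sin(x) + x**2

def jvp(f, x, v, eps=1e-7):
    # Forward-difference JVP: each call costs two evaluations of f.
    return (f(x + eps * v) - f(x)) / eps

n, p = 50, 3              # input dimension n, number of columns in V
x = np.ones(n)
V = np.random.default_rng(0).random((n, p))

# Lazy operator: one JVP per column of V.
evals = 0
JV_lazy = np.column_stack([jvp(f, x, V[:, j]) for j in range(p)])
lazy_cost = evals

# Materialized (dense) Jacobian: one JVP per input dimension, then a matmul.
evals = 0
I = np.eye(n)
J = np.column_stack([jvp(f, x, I[:, j]) for j in range(n)])
JV_dense = J @ V
dense_cost = evals

print(lazy_cost, dense_cost)  # → 6 100: lazy wins while p << n
```

With sparsity and coloring the dense-side cost drops from n to the number of colors, which is exactly why the turning point moves around from problem to problem.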
That's a non-solution, since there are also plenty of situations where you need the operator itself, as in JFNK.
For the noobs among us, what is JFNK?
Jacobian-free Newton–Krylov.
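For context: JFNK solves F(x) = 0 with Newton's method, but the inner linear solve J δ = −F(x) uses a Krylov method that only ever needs products J·v, so J is never formed — which is why a lazy operator is essential there. A minimal Python/numpy sketch, with a finite-difference JVP and a hand-rolled conjugate-gradient inner solve (CG assumes a symmetric positive-definite Jacobian; real implementations typically use GMRES and scale the perturbation — everything here is illustrative):

```python
import numpy as np

def jvp(F, x, v, eps=1e-7):
    """Finite-difference Jacobian-vector product J(x) @ v."""
    return (F(x + eps * v) - F(x)) / eps

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Matrix-free conjugate gradient for SPD systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def jfnk(F, x0, tol=1e-8, maxiter=50):
    """Newton iteration; the inner solve touches J only through JVPs."""
    x = x0.copy()
    for _ in range(maxiter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x += cg(lambda v: jvp(F, x, v), -Fx)
    return x

# Toy problem: F(x) = A x + x**3 - b with A SPD, so J = A + 3 diag(x^2) is SPD.
rng = np.random.default_rng(0)
n = 10
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
F = lambda x: A @ x + x**3 - b
root = jfnk(F, np.zeros(n))
print(np.linalg.norm(F(root)))  # small residual, near machine-level accuracy
```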
Is there a chance to expand the code base to support matrix multiplications with the Jacobian? Here is an MWE that shows what I am trying to accomplish, based on the example from the documentation.
The current workaround is to loop over the columns of V, which makes it slower than computing the Jacobian with, for instance, ForwardDiff.jl and then multiplying (at least when the number of dimensions is small, as in the example). If anyone has a faster solution based on the existing code base, I'd be very grateful.
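One way to avoid the per-column loop, when the function broadcasts over a batch axis, is to push all perturbation directions through in a single evaluation — loosely analogous to how ForwardDiff.jl carries several dual-number partials per sweep. A hedged Python/numpy sketch (finite differences in place of the lazy operator; an elementwise toy f; all names illustrative):

```python
import numpy as np

def f(x):
    # Elementwise toy function; broadcasts over trailing batch axes.
    return np.sin(x) + x**2

def jac_times_V_loop(f, x, V, eps=1e-7):
    """One JVP per column of V: the slow workaround from the thread."""
    fx = f(x)
    return np.column_stack([(f(x + eps * V[:, j]) - fx) / eps
                            for j in range(V.shape[1])])

def jac_times_V_batched(f, x, V, eps=1e-7):
    """All columns at once: one broadcasted call replaces the loop."""
    fx = f(x)
    return (f(x[:, None] + eps * V) - fx[:, None]) / eps

x = np.linspace(0.0, 1.0, 6)
V = np.eye(6)[:, :3]
out_loop = jac_times_V_loop(f, x, V)
out_batch = jac_times_V_batched(f, x, V)
print(np.allclose(out_loop, out_batch))  # → True
```

Whether something equivalent is possible with the package's operator depends on its internals, but it captures the kind of batched JVP the request is asking for.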