At the moment the Hessian is computed as:
H = J.transpose() * J; (1)
this can be optimized using:
_H.selfadjointView<Eigen::Lower>().rankUpdate(_J.transpose()); (2)
However, we must pay attention to sparsity: the solver needs to be initialized with the sparsity pattern from (1), but during the solve the Hessian should then be computed as in (2). Otherwise errors arise, and the update only works when the Hessian is diagonal, since (2) writes only the lower triangle of the matrix.