jwallwork23 opened 1 year ago
Could we do the same in Hessian-based fixed point iteration, where we'd skip the last $n$ converged subintervals? We could then do a final solve on the final adapted meshes for the output of MeshSeq.fixed_point_iteration().
Excellent point! Yeah that’d be a nice optimisation too.
Easily achieved with yield
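To illustrate the idea, here is a minimal sketch of a generator-based fixed point loop. The names (`fixed_point_iterations`, `adapt`) are illustrative, not the actual Pyroteus/Goalie API: the point is that `yield` hands control back to the caller after each iteration, and converged subintervals are simply skipped on subsequent passes.

```python
# Hypothetical sketch (names are illustrative, not the Pyroteus API):
# a generator-based fixed point loop that yields after each iteration
# and freezes subintervals whose element counts have converged.

def fixed_point_iterations(element_counts, adapt, rtol=0.01, maxiter=10):
    """Yield per-iteration element counts, freezing converged subintervals.

    `adapt(i, n)` is a user-supplied callable returning the new element
    count for subinterval `i` given its current count `n`.
    """
    converged = [False] * len(element_counts)
    for _ in range(maxiter):
        previous = list(element_counts)
        for i, done in enumerate(converged):
            if done:
                continue  # skip adaptation on converged subintervals
            element_counts[i] = adapt(i, element_counts[i])
        # mark subintervals whose element count changed by less than rtol
        for i, (old, new) in enumerate(zip(previous, element_counts)):
            if abs(new - old) <= rtol * old:
                converged[i] = True
        yield list(element_counts)
        if all(converged):
            break
```

The caller can then iterate over the generator, inspect the element counts after each pass, and stop early if desired, which is the flexibility `yield` buys here.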
(see #145)
How would space-time normalisation work in this case? Would it make sense to still recompute the metric at each latter iteration for these converged subintervals and then space-time normalise, but we still don't adapt the converged subintervals?
An initial thing to try might be to keep track of the complexities of the metrics that were used for the subintervals that get fixed, subtract those from the overall target complexity available for the remaining ones, and do space-time normalisation over the reduced set of subintervals.
One problem that might arise is if significant resolution was used for the dropped-out subintervals and it turns out that more DoFs are required for the remaining ones than are left in the budget.
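The budget accounting described above could be sketched as follows. This is not Pyroteus code, just an illustration of subtracting the complexities spent on fixed subintervals from the overall target, with a guard for the failure mode where the fixed subintervals have already consumed (nearly) the whole budget:

```python
# Illustrative sketch (not the Pyroteus API): redistribute the space-time
# metric complexity budget over the subintervals still being adapted.

def remaining_target_complexity(target, fixed_complexities, min_budget=0.0):
    """Subtract complexity spent on fixed subintervals from the target.

    Raises if the fixed subintervals have already consumed the budget,
    which is the failure mode noted above.
    """
    spent = sum(fixed_complexities)
    remaining = target - spent
    if remaining <= min_budget:
        raise ValueError(
            f"Fixed subintervals use {spent:.0f} of the target "
            f"complexity {target:.0f}; nothing left to adapt with."
        )
    return remaining
```

Space-time normalisation would then be applied over the reduced set of subintervals using the returned value as the target.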
Suppose we are doing a goal-oriented fixed point iteration. Following pyroteus/pyroteus#154, we are able to skip the metric construction and mesh adaptation step on any subinterval for which the element count has converged, for example. However, the major cost of goal-oriented methods is typically the solution of forward and adjoint problems to get the ingredients for the error indicators, especially if enrichment is used.
If convergence has been reached on the first $n$ subintervals then we can get away without solving the adjoint equation on them, i.e., the adjoint problem is solved backwards in time only as far as subinterval $n+1$.
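The loop structure (not the Goalie solver API) might look like the following sketch: iterate over subintervals in reverse for the adjoint solve, stopping once the first $n$ converged subintervals are reached, since error indicators are no longer needed there.

```python
# Hedged sketch of the backwards-in-time loop bound only: which
# subinterval indices (0-based) still need an adjoint solve when the
# first `num_converged` subintervals are frozen.

def adjoint_subintervals_to_solve(num_subintervals, num_converged):
    """Indices of subintervals needing an adjoint solve, in reverse order.

    The adjoint is solved backwards from the last subinterval down to
    index `num_converged`, i.e. subinterval n+1 in 1-based numbering.
    """
    return list(range(num_subintervals - 1, num_converged - 1, -1))
```

With enrichment in play, skipping those solves avoids the dominant cost on the converged portion of the time interval.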