Closed by bmcdanie 4 months ago
@bmcdanie Is this the function this issue's referring to? https://github.com/project-asgard/asgard/blob/develop/src/time_advance.cpp#L162
we have support for time-stepping with implicit and implicit-explicit methods using GMRES and BiCG solvers
much of our implementation work to this point has focused on explicit time advance (ET). however, we also want to support implicit timestepping (IT).
ET is driven by a large matrix-vector product. we have leveraged our problem structure (tensor encoding) to avoid forming the large matrix, and instead perform the required arithmetic via many small matrix-matrix multiplications. currently, ET is GPU-accelerated and the work can be distributed across multiple GPUs/nodes.
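to make that concrete, here is a minimal sketch of the matrix-free idea for a 2D Kronecker-structured operator, assuming dense row-major operators and a column-stacked coefficient vector (the function name and layout here are illustrative, not our actual kronmult code):

```cpp
#include <vector>

// y = kron(A, B) * x computed as vec(B * X * A^T), where X is the nB-by-nA
// reshape of x -- the full (nA*nB)^2 operator is never formed.
// A: nA x nA, B: nB x nB, dense row-major; x, y have length nA*nB with
// x[j*nB + i] = X(i,j) (column-stacked).
void kron_matvec_2d(std::vector<double> const &A, int nA,
                    std::vector<double> const &B, int nB,
                    std::vector<double> const &x, std::vector<double> &y)
{
  std::vector<double> T(nB * nA, 0.0); // T = B * X
  for (int j = 0; j < nA; ++j)
    for (int i = 0; i < nB; ++i)
      for (int k = 0; k < nB; ++k)
        T[j * nB + i] += B[i * nB + k] * x[j * nB + k];

  y.assign(nA * nB, 0.0); // Y = T * A^T, y = vec(Y)
  for (int j = 0; j < nA; ++j)
    for (int i = 0; i < nB; ++i)
      for (int k = 0; k < nA; ++k)
        y[j * nB + i] += T[k * nB + i] * A[j * nA + k];
}
```

this costs on the order of `nA*nB*(nA+nB)` flops and only `nA*nB` extra storage instead of the `(nA*nB)^2` of the assembled matrix; ET applies the same structure in higher dimensions via the many small matrix-matrix multiplications described above.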
IT is driven by a matrix solve. we currently have a simple, unoptimized method for IT: build the full system matrix and call LAPACK to solve it. however, we would like to extend our implementation to run on GPUs/many nodes to solve larger problems.
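for reference, the dense path amounts to something like the sketch below (the backward-Euler form, names, and column-major layout are illustrative assumptions, not our exact code; a 32-bit LAPACK interface is assumed):

```cpp
#include <vector>

// LAPACK dense solver (Fortran interface, assumed 32-bit integers)
extern "C" void dgesv_(int *n, int *nrhs, double *a, int *lda, int *ipiv,
                       double *b, int *ldb, int *info);

// one backward-Euler step: solve (I - dt*A) x_next = x_cur.
// A is n x n, dense column-major; x holds x_cur on entry, x_next on exit.
// A is taken by value so the caller's copy is not overwritten by LU factors.
int implicit_euler_step(std::vector<double> A, int n, double dt,
                        std::vector<double> &x)
{
  // form the system matrix S = I - dt*A in place
  for (int j = 0; j < n; ++j)
    for (int i = 0; i < n; ++i)
      A[j * n + i] = (i == j ? 1.0 : 0.0) - dt * A[j * n + i];

  std::vector<int> ipiv(n);
  int nrhs = 1, info = 0;
  dgesv_(&n, &nrhs, A.data(), &n, ipiv.data(), x.data(), &n, &info);
  return info; // 0 on success
}
```

the n^2 storage for the assembled matrix and the n^3 factorization are what keep this path from scaling to the larger problems we want to run.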
there are multiple possible approaches here. we have not explored (distributed) sparse direct solves - we're not sure how sparse our matrices are, how much storage they would consume, what libraries are available, what the performance would be, etc. we have looked somewhat into matrix-free (iterative) methods, and think this may be the best avenue. these routines can work without explicitly forming the system matrix as long as a matrix-vector product is available (we already have this for ET). however, we have not found any library that offers distributed, GPU-accelerated, matrix-free iterative solves. some iterative algorithms (e.g., GMRES, sketched below) are fairly simple to implement, and we would get distribution/acceleration "out of the box" from our existing ET functions, but rolling our own might require some means of preconditioning the matrices.
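for illustration, here is a textbook-style, unrestarted, unpreconditioned GMRES sketch (not a proposal for the final implementation): the only application-specific piece it needs is a matvec callback, which the ET machinery already provides.

```cpp
#include <cmath>
#include <functional>
#include <vector>

using vec    = std::vector<double>;
using matvec = std::function<void(vec const &, vec &)>; // y = A*x

static double dot(vec const &a, vec const &b)
{
  double s = 0.0;
  for (int i = 0; i < static_cast<int>(a.size()); ++i)
    s += a[i] * b[i];
  return s;
}

// solve A*x = b with unrestarted GMRES; x holds the initial guess on entry
// and the approximate solution on exit. returns a residual norm estimate.
double gmres(matvec const &apply_A, vec const &b, vec &x, int max_iter,
             double tol)
{
  int const n = static_cast<int>(b.size());

  // initial residual r = b - A*x and its norm
  vec r(n);
  apply_A(x, r);
  for (int i = 0; i < n; ++i)
    r[i] = b[i] - r[i];
  double const beta = std::sqrt(dot(r, r));
  if (beta < tol)
    return beta;

  std::vector<vec> V(1, vec(n)); // Krylov basis vectors
  for (int i = 0; i < n; ++i)
    V[0][i] = r[i] / beta;

  std::vector<vec> H(max_iter + 1, vec(max_iter, 0.0)); // Hessenberg matrix
  vec cs(max_iter, 0.0), sn(max_iter, 0.0), g(max_iter + 1, 0.0);
  g[0] = beta;

  int j = 0;
  for (; j < max_iter; ++j)
  {
    // Arnoldi: one matvec plus modified Gram-Schmidt orthogonalization
    vec w(n);
    apply_A(V[j], w);
    for (int i = 0; i <= j; ++i)
    {
      H[i][j] = dot(w, V[i]);
      for (int k = 0; k < n; ++k)
        w[k] -= H[i][j] * V[i][k];
    }
    H[j + 1][j] = std::sqrt(dot(w, w));
    V.push_back(vec(n, 0.0));
    if (H[j + 1][j] > 0.0)
      for (int k = 0; k < n; ++k)
        V[j + 1][k] = w[k] / H[j + 1][j];

    // apply the stored Givens rotations to the new column, then add one more
    for (int i = 0; i < j; ++i)
    {
      double const t = cs[i] * H[i][j] + sn[i] * H[i + 1][j];
      H[i + 1][j]    = -sn[i] * H[i][j] + cs[i] * H[i + 1][j];
      H[i][j]        = t;
    }
    double const d = std::sqrt(H[j][j] * H[j][j] + H[j + 1][j] * H[j + 1][j]);
    cs[j]          = H[j][j] / d;
    sn[j]          = H[j + 1][j] / d;
    H[j][j]        = d;
    H[j + 1][j]    = 0.0;
    g[j + 1]       = -sn[j] * g[j];
    g[j]           = cs[j] * g[j];

    if (std::fabs(g[j + 1]) < tol) // converged
    {
      ++j;
      break;
    }
  }

  // back-substitute the small triangular system and apply the update to x
  vec y(j, 0.0);
  for (int i = j - 1; i >= 0; --i)
  {
    y[i] = g[i];
    for (int k = i + 1; k < j; ++k)
      y[i] -= H[i][k] * y[k];
    y[i] /= H[i][i];
  }
  for (int i = 0; i < j; ++i)
    for (int k = 0; k < n; ++k)
      x[k] += y[i] * V[i][k];

  return std::fabs(g[j]);
}
```

a real version would add restarts, preconditioning, and distributed/accelerated dot products and vector updates, but the structure shows why reusing the existing ET matrix-vector product is attractive: the solver never touches the system matrix directly.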