brendanjmeade / celeri

Next generation earthquake cycle kinematics
BSD 3-Clause "New" or "Revised" License

H-matrices! #73

Closed tbenthompson closed 2 years ago

tbenthompson commented 2 years ago

Just to follow up on the ideas we talked about:

  1. The throw-it-away H-matrix.
     a. The H-matrix construction and matrix-vector product. (Done, PR soon!)
     b. Getting the iterative linear solver to work with a sparse matrix.
     c. Combining the iterative solver and the H-matrix to solve a real celeri problem.
  2. H2 matrices: this is a further approximation step that will probably reduce memory by another factor of 2-3x. I don't think this is super complicated but I haven't investigated deeply.
  3. ACA H-matrix: Construct the H-matrix approximation without constructing the full original matrix subblocks. The main benefit is faster, lower-memory H-matrix construction.
  4. Directly inverting H-matrices. The main benefit is faster solves. This would be super helpful for two goals: time-dependent inversion and integrating celeri with boundary element methods.

The only critical step for now is #1a-c, but I wanted to include the other three as a sketch of the opportunities that are available.
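To make steps 1a and 1b concrete, here is a minimal toy sketch (not celeri's implementation, and with made-up point clouds): a kernel block between two well-separated clusters is numerically low rank, so a truncated SVD gives a compressed representation, and wrapping its matvec in a `scipy.sparse.linalg.LinearOperator` is exactly what an iterative solver needs.

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
src = rng.uniform(10.0, 11.0, size=(200, 2))   # hypothetical source points
obs = rng.uniform(0.0, 1.0, size=(200, 2))     # well-separated observation points

# Smooth kernel between well-separated clusters -> numerically low rank.
dists = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
block = 1.0 / dists

# Truncated SVD: store U * s and V instead of the full block.
U, s, Vt = np.linalg.svd(block)
k = int(np.sum(s > 1e-8 * s[0]))               # numerical rank is tiny here
Us, Vt_k = U[:, :k] * s[:k], Vt[:k]

def matvec(x):
    # Low-rank matvec: O(k * n) work instead of O(n^2).
    return Us @ (Vt_k @ x)

# An iterative solver consumes exactly this interface.
op = spla.LinearOperator(block.shape, matvec=matvec)
x = rng.standard_normal(200)
err = np.linalg.norm(op.matvec(x) - block @ x) / np.linalg.norm(block @ x)
```

An H-matrix applies this idea recursively: far-field blocks get the low-rank treatment, near-field blocks stay dense, and the full matvec sums the blockwise products.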

tbenthompson commented 2 years ago

@brendanjmeade @jploveless

brendanjmeade commented 2 years ago

@tbenthompson 1a-1c would be amazing!

tbenthompson commented 2 years ago

Just a quick note: I think the planar projection is causing the H-matrix approximation to perform slightly worse than I would've expected. This is because the triangular elements aren't in exactly the same place each time. This isn't a problem. But, I just wanted to record this thought somewhere and this seemed like the logical place.

tbenthompson commented 2 years ago

The difficult parts of 1a-1c are done. The remaining bits are easier but still necessary:

- Wrapping it up nicely to be used automatically.
- Computing the column norm preconditioner without access to the full original matrix.
- Building the H-matrix and the sparse solver for multiple meshes. The simple solution here involves running a for loop, but it actually might be helpful to combine all the meshes into a single mesh for the sake of efficiently constructing an H-matrix tree.
- Identifying a cutoff distance beyond which we don't trust the TDE calculations? At a very minimum, stations in Tibet can ignore TDEs in the Caribbean.

I'm going to wait on these next steps until Brendan (and Jack?) have had a chance to look through and comment on the current state.

brendanjmeade commented 2 years ago

> The difficult parts of 1a-1c are done. The remaining bits are easier but still necessary:

> Wrapping it up nicely to be used automatically. Computing the column norm preconditioner without access to the full original matrix.

Is this currently done in these lines?

```python
col_norms = np.linalg.norm(Xp, axis=0)
XpP = Xp / col_norms[None, :]
```

I think that we'll have the TDE -> velocity matrix and the smoothing matrices at least at some point. Can we calculate col_norms for each of these when they exist (ephemerally), and would that be enough?

> Building the H-matrix and the sparse solver for multiple meshes. The simple solution here involves running a for loop, but it actually might be helpful to combine all the meshes into a single mesh for the sake of efficiently constructing an H-matrix tree.

@jploveless Do you think we can convert the "blocks"-style input files to "celeri"-style input files for the 3-subduction-zone Japan model? That would be a good 3-mesh model that we care about running soon. This is a big test because the single-mesh Cascadia example is an oddball in that it has just one mesh. Many meshes will be the norm going forward.

I can start working on the full global block model conversion, but we'll probably need one of the old meade02/03 computers to actually store all the matrices for this. Not sure, but I can at least get the files formatted correctly for celeri.
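The mesh-combination idea above could be sketched like this (a toy with made-up arrays, not celeri's data structures): stack every mesh's element centroids into one point set so a single cluster tree covers all meshes, and keep bookkeeping arrays to split results back out per mesh.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three toy "meshes", each an (n_i, 3) array of TDE centroids
# (names are illustrative only).
meshes = [rng.standard_normal((n, 3)) for n in (40, 25, 60)]

all_centroids = np.vstack(meshes)                       # one combined point set
mesh_id = np.concatenate(
    [np.full(len(m), i) for i, m in enumerate(meshes)]  # provenance of each row
)

# Offsets to slice per-mesh results back out of a combined solution vector.
offsets = np.cumsum([0] + [len(m) for m in meshes])
per_mesh_slices = [slice(offsets[i], offsets[i + 1]) for i in range(len(meshes))]
```

The cluster tree would then be built once over `all_centroids`, rather than once per mesh inside a for loop.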

> Identifying a cutoff distance beyond which we don't trust the TDE calculations? At a very minimum, stations in Tibet can ignore TDEs in the Caribbean.

I agree. I'll add a field to the *_command.json file that specifies this distance, and I'll work on propagating it through to the Okada and TDE calculations.
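A sketch of the proposed cutoff (the parameter name and arrays here are assumptions, not the actual *_command.json field or celeri's matrices): zero the TDE partials for station/element pairs farther apart than the cutoff distance, since those contributions are tiny and not trusted anyway.

```python
import numpy as np

rng = np.random.default_rng(2)
stations = rng.uniform(0, 100, size=(50, 2))       # toy station coordinates
tde_centroids = rng.uniform(0, 100, size=(80, 2))  # toy TDE centroids
partials = rng.standard_normal((50, 80))           # stand-in TDE -> velocity matrix

tde_cutoff_distance = 30.0                         # hypothetical command-file field
dists = np.linalg.norm(
    stations[:, None, :] - tde_centroids[None, :, :], axis=2
)
partials_cut = np.where(dists < tde_cutoff_distance, partials, 0.0)
```

A nice side effect is that the zeroed far-field pairs are exactly the blocks an H-matrix would have compressed anyway, so the cutoff and the low-rank structure are complementary.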

> I'm going to wait on these next steps until Brendan (and Jack?) have had a chance to look through and comment on the current state.

I'll be doing this next week. I can confirm that hmat_dev.ipynb runs fantastically for me!

tbenthompson commented 2 years ago

> I think that we'll have the TDE -> velocity matrix and smoothing matrices at least at some point. Can we calculate col_norms for each of these when they exist (ephemerally) and would that be enough?

Yes, that should work. I think we'll have to compute the columns one chunk at a time for each mesh, since we only plan to store one mesh's portion of the full original matrix at a time.
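The chunk-at-a-time idea could look roughly like this (a sketch with a hypothetical `make_block` standing in for ephemerally computing one mesh's operator block): accumulate squared column sums per chunk and discard each block, so the full matrix never exists at once.

```python
import numpy as np

n_rows, col_counts = 100, [30, 20, 50]

def make_block(j, n_cols):
    # Stand-in for ephemerally computing one mesh's TDE -> velocity block;
    # seeded per-chunk so the toy example is deterministic.
    return np.random.default_rng(j).standard_normal((n_rows, n_cols))

sq_sums = []
for j, n_cols in enumerate(col_counts):
    block = make_block(j, n_cols)              # exists only inside the loop
    sq_sums.append(np.sum(block**2, axis=0))   # squared column norms of this chunk

# Identical to np.linalg.norm(full_matrix, axis=0), without forming full_matrix.
col_norms = np.sqrt(np.concatenate(sq_sums))
```

Peak memory is then one block plus a vector of column norms, which matches the one-mesh-at-a-time storage plan.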

jploveless commented 2 years ago

> @jploveless Do you think we can convert the "blocks"-style input files to "celeri"-style input files for the 3 subduction zone Japan model? That would be a good 3 mesh model that we care about running soon. This is a big test because the single mesh Cascadia example is an oddball because it just has one mesh. Many meshes will be the norm going forward.

I added Japan input files in https://github.com/brendanjmeade/celeri/commit/942b5de499f247f0ed87cda888f544b4a8af3296 as well as a quick conversion utility for meshes that are only in .mat format. Gotta run to meetings but I hope to check these files running through celeri sometime this week.

brendanjmeade commented 2 years ago

@jploveless Thanks for this! I've done a bit more here. The command file in this folder now points to the correct files, and there's a mesh_parameters.json that points to the three meshes. The calculations partially run: the partials get calculated (not sure they're correct), but there's an indexing error in the construction of the weighting vector. I'm betting this is because it's not handling multiple meshes correctly.

There's now a dedicated notebook for Japan: celeri_direct_dense_japan.ipynb which was created with commit: https://github.com/brendanjmeade/celeri/commit/c94db2dddda8f526db5b926d62d7a42592dc25f0

This is a fantastic set of files to really test celeri on and I'm really excited to get the classic Japan model working here!

jploveless commented 2 years ago

I'll play around with this now. It does seem like there are some indexing issues in moving to multiple meshes. Thanks for the cleanup of the files!


brendanjmeade commented 2 years ago

All packed nicely into the celeri_solve command line interface, and still nicely debuggable in notebooks.