brendanjmeade / celeri

Next generation earthquake cycle kinematics
BSD 3-Clause "New" or "Revised" License

Fixes to triangular smoothing matrix #38

Closed · jploveless closed this 3 years ago

jploveless commented 3 years ago

This seems to fix the triangular smoothing matrix from #37. The new results, plotted in the notebook, are similar to those from the Matlab equivalent (though smaller in magnitude, since distance scaling in celeri uses meters, not km). There were some funny indexing things that stemmed from changing meshes.share within the distance calculation, but I just do the distance calculation differently so that it avoids changing anything about meshes.share.
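For reference, here is a minimal sketch of the kind of inverse-distance scaling being described, assuming element centroid coordinates in meters; the names `centroids` and `neighbors`, and the exact weighting, are illustrative and not the actual celeri implementation:

```python
import numpy as np

def inverse_distance_weights(centroids, neighbors):
    """Toy sketch: inverse-distance weights between adjacent triangle centroids.

    centroids : (n_tri, 3) array of centroid coordinates in meters (assumption).
    neighbors : (n_tri, 3) int array; neighbors[i, j] is the index of the element
                sharing edge j of element i, or -1 where there is no neighbor.
    """
    n_tri = centroids.shape[0]
    weights = np.zeros(neighbors.shape, dtype=float)
    for i in range(n_tri):
        for j, k in enumerate(neighbors[i]):
            if k >= 0:
                # Distance in meters (not km), which scales the weights down
                # relative to a km-based Matlab calculation.
                d = np.linalg.norm(centroids[i] - centroids[k])
                weights[i, j] = 1.0 / d
    return weights
```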

brendanjmeade commented 3 years ago

Thanks, Jack! This is working for me and is now merged. I just submitted another commit (https://github.com/brendanjmeade/celeri/commit/91dcecc7ae8cd4bc439b18024447ac01d8f2c636) that takes what you built and makes smoothing_matrix a sparse matrix. I also used direct indexing with "k" and "m" to avoid the flattening and ravel approach, which I don't think is supported by sparse matrices. I get what looks like the same answer!

[Figure 12]
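A minimal sketch of the direct-indexing idea described here, using `scipy.sparse`; the names `k`, `m`, and `off_diagonal_terms` mirror the comment, but the sizes and values are toy data rather than celeri's:

```python
import numpy as np
from scipy.sparse import lil_matrix

# Toy dimensions: e.g., 3 elements x 3 slip components per element (assumption).
n_components = 9
smoothing_matrix = lil_matrix((n_components, n_components))

# Paired index arrays: each value lands at (k[i], m[i]), so no ravel/flat
# bookkeeping is needed to address positions in the full matrix.
k = np.array([0, 1, 2])                      # row indices (toy)
m = np.array([3, 4, 5])                      # column indices (toy)
off_diagonal_terms = np.array([0.1, 0.2, 0.3])
smoothing_matrix[k, m] = off_diagonal_terms

# Convert to CSR once assembly is done, for efficient arithmetic downstream.
smoothing_matrix = smoothing_matrix.tocsr()
```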

jploveless commented 3 years ago

That's interesting: direct indexing would not work in Matlab because it'd try to fill the full 3-by-3 block identified by the k and m arrays with the scalar value of off_diagonal_terms[i, j]. I'm not sure how Python knows to put that scalar along the main diagonal, but it definitely simplifies things vs. the ravel/flat approach.
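The difference comes down to indexing semantics: NumPy pairs the two index arrays element-wise, whereas Matlab's `A(k, m)` addresses the whole k-by-m block. A toy illustration (not celeri code):

```python
import numpy as np

A = np.zeros((4, 4))
k = np.array([0, 1, 2])
m = np.array([1, 2, 3])

# Element-wise (paired) indexing: sets only A[0, 1], A[1, 2], A[2, 3].
A[k, m] = 5.0

# The Matlab-style block assignment would instead be written explicitly:
# A[np.ix_(k, m)] = 5.0   # fills the full 3-by-3 block
print(A)
```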

Sorry, I had made this a full matrix because I always had to call full in Matlab back when sparse matrix support was kinda minimal. If there are good tools in Python for inverting full/sparse combos, it'll be great to have the memory savings by keeping smoothing sparse.

brendanjmeade commented 3 years ago

I'm not sure that there are good tools for inverting sparse matrices, but I figured it was easier to just store them this way. We can promote them to full numpy matrices with smooth_matrix.toarray(). With the SVD approach on the horizon, I think the end game will be to calculate the triangular partials and smoothing matrix for one mesh at a time, do the weighted eigenvalue approach, and then discard the full triangular partials and smoothing matrix because we won't need them anymore. Not having to store all of these all the time will save a huge amount of memory!
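A toy illustration of the sparse-to-dense promotion mentioned above; the matrix size and density here are arbitrary, not celeri's:

```python
import scipy.sparse

# A random sparse matrix standing in for a per-mesh smoothing matrix.
smooth_matrix = scipy.sparse.random(3000, 3000, density=0.001, format="csr")

# Promote to a full numpy array only when a dense solver/SVD needs it,
# then the dense copy can be discarded.
dense = smooth_matrix.toarray()

# Sparse storage holds only the nonzero values; the dense array is
# 3000 * 3000 * 8 bytes (~72 MB) at float64.
print(smooth_matrix.data.nbytes, dense.nbytes)
```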