Open · brendanjmeade opened 1 year ago
I really like this list! A few thoughts below:
Thanks for sharing @jploveless!
> - The n_eigenvalues for each mesh sounds great, and it's a nice analog for the individual mesh smoothing weights

Agreed, and the eigenvalue approach also radically improves condition numbers and reduces memory requirements, because there is no smoothing matrix.
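A minimal sketch of the idea, assuming the smooth modes come from a mesh Laplacian; the 1-D chain graph and the variable names here are illustrative stand-ins, not celeri's actual implementation:

```python
import numpy as np

def chain_laplacian(m):
    """Graph Laplacian of a 1-D chain of m elements (toy stand-in for a
    triangle-adjacency Laplacian on a real fault mesh)."""
    L = 2.0 * np.eye(m)
    L[0, 0] = L[-1, -1] = 1.0
    idx = np.arange(m - 1)
    L[idx, idx + 1] = -1.0
    L[idx + 1, idx] = -1.0
    return L

m_elements = 50        # number of TDEs on the mesh
n_eigenvalues = 10     # retained smooth modes
n_obs = 3 * 40         # 40 stations, 3 components each

L = chain_laplacian(m_elements)
evals, evecs = np.linalg.eigh(L)   # ascending: smoothest modes first
V = evecs[:, :n_eigenvalues]       # (m_elements, n_eigenvalues)

rng = np.random.default_rng(0)
G = rng.normal(size=(n_obs, m_elements))  # stand-in for TDE partials

# Reduced problem: estimate n_eigenvalues coefficients instead of
# m_elements slips, with smoothness built in -- no smoothing matrix,
# and a much smaller design matrix to store and factor.
G_reduced = G @ V
print(G.shape, "->", G_reduced.shape)
```

The memory saving is just the column count: the estimator sees an (n_obs, n_eigenvalues) matrix instead of (n_obs, m_elements) plus a smoothing block.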
> - I will get back to work on GMT plotting. I know we ran into some challenges before in in-notebook performance of some of the pyGMT operations, so I think constructing a .py with plotting routines is the way to go. For the in-notebook visualization, I wonder if something like cartopy would be worth implementing.
Emily and I have some pretty nice results with straight matplotlib that we're happy to share. My current perspective is that pyGMT is really worth it because the output quality is excellent, as is support for map projections. I also think that matplotlib is worth it because it's super easy to use. I feel like Cartopy is stuck between the two with less of an obvious use case. I also agree that doing this outside of a notebook is a good idea. That would also be consistent with the post-processing step that I mentioned in the issue.
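For reference, the kind of quick in-notebook matplotlib view being discussed can be as little as a quiver plot of station velocities. Everything below is synthetic and illustrative (the data, the extent, and the scaling), not the actual plotting code:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless here; a notebook would use the inline backend
import matplotlib.pyplot as plt

# Synthetic GPS velocity field: 30 stations with east/north rates in mm/yr.
rng = np.random.default_rng(1)
lon = rng.uniform(-125, -115, 30)
lat = rng.uniform(32, 42, 30)
ve = rng.normal(0, 10, 30)   # mm/yr east
vn = rng.normal(0, 10, 30)   # mm/yr north

fig, ax = plt.subplots(figsize=(5, 5))
q = ax.quiver(lon, lat, ve, vn, angles="xy", scale=200, width=0.003)
ax.quiverkey(q, 0.85, 0.95, 20, "20 mm/yr", labelpos="E")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
# Rough lon/lat aspect correction -- a crude stand-in for a real projection.
ax.set_aspect(1.0 / np.cos(np.deg2rad(lat.mean())))
fig.savefig("velocities.png", dpi=150)
```

This is roughly where plain matplotlib tops out: fast and easy in a notebook, but the aspect hack above is exactly the kind of thing pyGMT's projections handle properly in a publication figure.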
> - I'd really like to chat about potential time-dependent approaches.
Definitely!
> - I have a student who is working a little on automatic mesh generation from the reference GBM segment file. I think it'll take some time before we have something usable, but we're at least working in Python now and we are exploring scaling the element size based on local density of GPS stations.
That sounds exciting! I will note that with the EV approach, the notion of mesh spacing becomes somewhat less important because the eigenvalues are only very weakly dependent on mesh resolution.
> Emily and I have some pretty nice results with straight matplotlib that we're happy to share. My current perspective is that pyGMT is really worth it because the output quality is excellent, as is support for map projections. I also think that matplotlib is worth it because it's super easy to use. I feel like Cartopy is stuck between the two with less of an obvious use case. I also agree that doing this outside of a notebook is a good idea. That would also be consistent with the post-processing step that I mentioned in the issue.
This is a good point. Doing visualization within notebooks with straight matplotlib and outside of notebooks with pyGMT seems like the better approach, since they really do serve different purposes.
> That sounds exciting! I will note that with the EV approach, the notion of mesh spacing becomes somewhat less important because the eigenvalues are only very weakly dependent on mesh resolution.
In thinking about this, I was also considering computation time for partials, but maybe cutde makes that a nearly irrelevant concern! I do think that if we used smoothing or some other regularization method, reducing the number of triangles would be helpful, but perhaps there's an advantage to keeping them more uniform in size for the radial basis functions?
> This is a good point. Doing visualization within notebooks with straight matplotlib and outside of notebooks with pyGMT seems like the better approach, since they really do serve different purposes.
I agree!
> In thinking about this, I was also considering computation time for partials, but maybe cutde makes that a nearly irrelevant concern!
That's a great point, and also one worth discussing. cutde on its own is a rocket ship, capable of doing 10 million TDE-to-station calculations per second. However, as currently used in celeri, we are orders of magnitude slower than that. The primary reason for this is that we have a separate map projection for every TDE. The map projection in and of itself isn't too slow because we're calling out to the proj binary. The challenge is that with map projections, the station coordinates that are used for a calculation are effectively different for every TDE. This means that we'd have to think about how to package things properly for cutde if we wanted to try one giant and very fast call. There might also be memory limits that we hit.
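To make the packaging question concrete, here is a toy sketch. `tde_disp` is a hypothetical stand-in for a single cutde evaluation and the equirectangular `local_project` stands in for the per-TDE proj call; the point is only that the projected station coordinates differ per TDE, so a single giant call has to operate on flattened (station, TDE) pairs, materializing n_sta * n_tde observation rows:

```python
import numpy as np

def local_project(lon, lat, lon0, lat0):
    """Toy local projection of lon/lat (deg) to km about (lon0, lat0)."""
    k = 111.19  # ~km per degree of latitude
    x = (lon - lon0) * k * np.cos(np.deg2rad(lat0))
    y = (lat - lat0) * k
    return np.column_stack([x, y])

def tde_disp(obs_xy, centroid_xy, slip):
    """Hypothetical displacement kernel (cutde does the real work)."""
    r2 = np.sum((obs_xy - centroid_xy) ** 2, axis=1) + 1.0
    return slip / r2

rng = np.random.default_rng(2)
sta_lon = rng.uniform(-121, -119, 100)
sta_lat = rng.uniform(35, 37, 100)
tde_lon = rng.uniform(-121, -119, 8)
tde_lat = rng.uniform(35, 37, 8)

# Current pattern: one projection + one kernel call per TDE, because each
# TDE sees its own projected copy of the station coordinates.
per_tde = []
for j in range(len(tde_lon)):
    obs = local_project(sta_lon, sta_lat, tde_lon[j], tde_lat[j])
    per_tde.append(tde_disp(obs, np.zeros(2), 1.0))
looped = np.column_stack(per_tde)  # (n_sta, n_tde)

# One batched call on flattened (station, TDE) pairs: same numbers, one
# kernel invocation, but n_sta * n_tde observation rows in memory at once.
J, I = np.meshgrid(np.arange(len(tde_lon)), np.arange(len(sta_lon)))
obs_all = local_project(sta_lon[I.ravel()], sta_lat[I.ravel()],
                        tde_lon[J.ravel()], tde_lat[J.ravel()])
batched = tde_disp(obs_all, np.zeros(2), 1.0).reshape(len(sta_lon), len(tde_lon))
```

The flattened version is where the memory limit worry comes from: at realistic station and TDE counts the pairwise observation array can dwarf the partials matrix itself.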
> I do think that if we used smoothing or some other regularization method, reducing the number of triangles would be helpful, but perhaps there's an advantage to keeping them more uniform in size for the radial basis functions?
Definitely agree. I actually don't know how different the eigenvectors would look if we densified just one part of the mesh. That would be very interesting (and easy) to look at!
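It really is easy to look at in a toy setting. Here is a 1-D sketch (linear finite elements with a lumped mass matrix on [0, 1], a stand-in for the triangulated fault mesh): densify only the middle of the domain and compare the leading Laplacian eigenvalues against the uniform mesh. With a proper mass matrix both should sit close to the continuum Neumann values (k*pi)^2, which is the "weak dependence on resolution" claim in miniature:

```python
import numpy as np

def laplace_modes(x, k=4):
    """Leading eigenpairs of the 1-D FEM Laplacian on nodes x,
    solved as a symmetrized generalized problem with lumped mass."""
    n = len(x)
    h = np.diff(x)
    K = np.zeros((n, n))
    m = np.zeros(n)
    for i, hi in enumerate(h):
        K[i, i] += 1 / hi
        K[i + 1, i + 1] += 1 / hi
        K[i, i + 1] -= 1 / hi
        K[i + 1, i] -= 1 / hi
        m[i] += hi / 2
        m[i + 1] += hi / 2
    d = 1 / np.sqrt(m)  # M^(-1/2) for the lumped (diagonal) mass matrix
    evals, evecs = np.linalg.eigh(d[:, None] * K * d[None, :])
    return evals[:k], (d[:, None] * evecs)[:, :k]

# Uniform mesh vs. a mesh densified only in [0.4, 0.6].
x_uniform = np.linspace(0, 1, 41)
x_dense = np.sort(np.concatenate([
    np.linspace(0.0, 0.4, 17)[:-1],
    np.linspace(0.4, 0.6, 41),        # 5x finer in the middle
    np.linspace(0.6, 1.0, 17)[1:],
]))

ev_u, _ = laplace_modes(x_uniform)
ev_d, _ = laplace_modes(x_dense)
# Continuum Neumann eigenvalues: 0, pi^2 ~ 9.87, (2*pi)^2 ~ 39.5, ...
print(ev_u)
print(ev_d)
```

The 2-D triangulated version is the interesting experiment, of course, but the same generalized-eigenproblem setup should carry over.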
It's time to take stock of where we are and dream about what's next. Here's a list of some things that might be next targets.
Thoughts @jploveless and @Emilycm-97 ?