seismology / mc_kernel

Calculate seismic sensitivity kernels using Axisem
GNU General Public License v3.0

Memory requirement for runs with high frequencies #32

Closed sstaehler closed 9 years ago

sstaehler commented 9 years ago

A problem I encountered on SuperMUC for runs with periods of 2 s (testing kernel convergence) is the memory requirement. SuperMUC offers 1.6 GB per node, which is a usual value, I guess. Since the AxiSEM mesh of a 2 s run has 3.8e7 GLL points, storing the 12 mesh parameters for each GLL point alone eats up 1.7 GB, i.e. all of the available memory. Now it is not really necessary to waste that much memory on storing the model parameters: they only depend on depth and are defined by a few hundred parameters (e.g. the polynomial coefficients for all layers and all parameters in PREM).
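A back-of-the-envelope check of the estimate above, assuming single-precision (4-byte) storage per parameter:

```python
# Rough memory estimate for storing all model parameters at every GLL point
n_gll = 3.8e7          # GLL points in a 2 s AxiSEM mesh
n_params = 12          # mesh parameters stored per GLL point
bytes_per_value = 4    # single precision

total_gib = n_gll * n_params * bytes_per_value / 2**30
print(f"{total_gib:.2f} GiB")
```

This comes out to about 1.7 GiB, matching the number quoted above.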

So, what is the best solution for this?

Any suggestions?

martinvandriel commented 9 years ago

Why do you load them all on opening the file? Could this not be done on demand?

sstaehler commented 9 years ago

To avoid random I/O access during runtime? The speedup on a large-memory machine is quite high.

martinvandriel commented 9 years ago

Memoization?

sstaehler commented 9 years ago

AAAAAAAAARGH

sstaehler commented 9 years ago

Yes, that would be a solution. But it still means unnecessary file accesses.

martinvandriel commented 9 years ago

Well, not too many, because with 100 MB you can already cache a good chunk of the model. Anyway, I think this is the cleanest solution compared to getting the 1D model again from some other source.
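A minimal sketch of the memoization idea, in Python for brevity (mc_kernel itself is Fortran): load model parameters from file in chunks on first access and keep recently used chunks in memory. The names `load_chunk`, `model_at`, and `CHUNK_SIZE` are purely illustrative, not mc_kernel's actual API, and the file read is stubbed out.

```python
from functools import lru_cache
import numpy as np

CHUNK_SIZE = 4096  # GLL points per chunk (illustrative)

@lru_cache(maxsize=256)  # ~256 chunks * 4096 pts * 12 params * 4 B ~ 50 MB cached
def load_chunk(chunk_id):
    # Stand-in for a file read (e.g. netCDF) of the mesh parameters
    # belonging to one chunk of GLL points.
    return np.zeros((CHUNK_SIZE, 12), dtype=np.float32)

def model_at(gll_point):
    # Repeated lookups in the same chunk hit the cache, not the disk.
    chunk_id, offset = divmod(gll_point, CHUNK_SIZE)
    return load_chunk(chunk_id)[offset]
```

With a reasonable chunk size the random file accesses collapse to one read per chunk, while the resident memory stays bounded by the cache size rather than the full mesh.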

sstaehler commented 9 years ago

True, but it still means some additional file accesses.

Alternative: read the model from the file once and re-parametrize it over depth. Disadvantage: it might assign wrong model parameters to single GLL points. Would that be a problem?
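A sketch of the re-parametrization alternative: store the model only as a 1D profile over depth and evaluate it at each GLL point's depth, instead of keeping all 12 parameters per point. The knot spacing and the toy vp profile below are illustrative, not PREM coefficients.

```python
import numpy as np

# 1D model profile: a few hundred depth knots instead of 3.8e7 GLL values
depth_knots = np.linspace(0.0, 6371.0, 200)   # km, surface to center
vp_profile = 5.8 + 0.001 * depth_knots        # toy vp(depth) in km/s

def vp_at_depth(depth_km):
    # Piecewise-linear interpolation over the 1D profile. Near a layer
    # discontinuity this can assign slightly wrong values to GLL points
    # sitting on the boundary, which is the disadvantage mentioned above.
    return np.interp(depth_km, depth_knots, vp_profile)
```

The storage drops from O(number of GLL points) to O(number of depth knots); the open question from the comment above is whether the occasional misassignment at layer boundaries is acceptable.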