iris-edu / irisws-syngine

Project components for the IRIS Synthetics Engine (irisws-syngine) web service
GNU General Public License v2.0
2 stars 0 forks source link

Final Set of 1D Models #11

Closed krischer closed 8 years ago

krischer commented 8 years ago

Current Set:

| Model    | Period range (s) | dt (s) | Duration (s) | Components              | Notes                                 | Size   | Link   |
|----------|------------------|--------|--------------|--------------------------|---------------------------------------|--------|--------|
| prem_ani | 20-100           | 4.87   | 1797         | vertical and horizontal  | 20s_PREM_ANI_FORCES, for testing only | 0.6 GB | (link) |
| ak135f   | 2-100            | 0.487  | 3700         | vertical and horizontal  |                                       | 1.0 TB | (link) |
| iasp91   | 2-100            | 0.483  | 3700         | vertical and horizontal  |                                       | 1.3 TB | (link) |
| prem_iso | 2-100            | 0.488  | 3700         | vertical and horizontal  | isotropic                             | 1.1 TB | (link) |

prem_iso and ak135f are buggy and need to be replaced.

Current wishlist as we understand it:

We have kind of lost track of the models you want. Please comment and we will try to make it happen before AGU. We'll just bring along one or more hard discs.
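As a side note, once the service is public the deployed model set can be checked programmatically; a minimal sketch using ObsPy's syngine client (this assumes a recent ObsPy that ships obspy.clients.syngine):

```python
# Minimal sketch: list the models deployed on the syngine service.
# Assumes a recent ObsPy that includes obspy.clients.syngine.
from obspy.clients.syngine import Client

client = Client()  # defaults to the IRIS DMC endpoint

# Returns a dict keyed by model name with metadata such as the shortest
# resolved period and database length.
models = client.get_available_models()
for name, info in sorted(models.items()):
    print(name, info)
```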

CTrabant commented 8 years ago

What do you mean prem_iso and ak135f are buggy? We have been testing and presenting this system for review; understanding just how far off our evaluation has been is important.

Alex will confirm the final list of models.

sstaehler commented 8 years ago

@CTrabant on ak135f: There was a small error in the velocity model as implemented in AxiSEM, see https://github.com/geodynamics/axisem/issues/36. It's fixed, but we'd need to recalculate the database. Note also that the velocity model of ak135f (with attenuation) is not the same as that of ak135, which is what standard TauP implements, for example.
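For reference, the "standard TauP" ak135 mentioned here is the one shipped with TauP and ObsPy's TauP port; a minimal sketch (the source depth and distance are arbitrary example values):

```python
# Minimal sketch: travel times from the standard ak135 model via ObsPy's
# TauP port. This is ak135, not the attenuating ak135f used for the
# databases; depth and distance below are arbitrary example values.
from obspy.taup import TauPyModel

model = TauPyModel(model="ak135")
arrivals = model.get_travel_times(source_depth_in_km=50.0,
                                  distance_in_degree=60.0,
                                  phase_list=["P", "S"])
for arrival in arrivals:
    print(arrival.name, arrival.time)
```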

On PREM_iso, I am not quite sure what @krischer is referring to.

martinvandriel commented 8 years ago

In PREM, Q_mu in the LVZ (radius 6151 to 6291 km) is 600 instead of 80. I bet the difference will be minor, but we need to fix it and recompute the DB anyway.
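To gauge how much the wrong Q_mu matters at a given frequency, one can use the standard amplitude factor exp(-pi f t / Q); the time spent in the LVZ below is a made-up example value, not a number from this thread:

```python
# Back-of-the-envelope: amplitude factor exp(-pi * f * t / Q) for a wave
# spending t seconds in the LVZ, comparing the buggy Q_mu = 600 with the
# correct PREM value of 80. t and the frequencies are arbitrary examples.
import math

def attenuation_factor(frequency_hz, time_in_layer_s, q):
    return math.exp(-math.pi * frequency_hz * time_in_layer_s / q)

t_lvz = 30.0  # hypothetical seconds an S wave spends inside the LVZ
for f in (0.05, 0.5, 1.0):  # 20 s, 2 s and 1 s periods
    print(f"f = {f:.2f} Hz: Q=80 -> {attenuation_factor(f, t_lvz, 80.0):.3f}, "
          f"Q=600 -> {attenuation_factor(f, t_lvz, 600.0):.3f}")
```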

sstaehler commented 8 years ago

Has there been a decision on the exact implementation of PREM oceanic crust?

Also, what about any of the other models in the IRIS EMC, like STW105 or TNA? http://ds.iris.edu/ds/products/emc-referencemodels/ Especially the latter might be interesting as a 0.5 s database for regional studies in the US, maybe with a maximum distance of 60 degrees and only 1800 s duration.

And didn't we plan to include a 1s database, vertical component only to 100km depth?

I would propose (changes to @krischer's plan in bold):

krischer commented 8 years ago

The TNA/SNA models unfortunately only specify vs so we could only do some kind of frankenmodel with a constant vp/vs and density + Q from somewhere else. Would be cool to have some regional US model and/or European model though.

Instead of "AK135f 1s period, 1800 s duration, vertical only" would it be possible to the same for all 3 components but 3600 s duration and only 100 km max depth? Should be around 1-2 TB.

sstaehler commented 8 years ago

The TNA/SNA models unfortunately only specify vs so we could only do some kind of frankenmodel with a constant vp/vs and density + Q from somewhere else. Would be cool to have some regional US model and/or European model though.

Yes, I'm also not too fond of creating a P model from a pure S model, but that's what a lot of people do. I'd have to check the literature for a canonical vp version of TNA. Density probably does not matter much at these short periods; Q is another story. For Europe, I'm not aware of any 1D velocity model that is more or less accepted.
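A minimal sketch of the "frankenmodel" idea: derive vp from vs with an assumed constant vp/vs ratio (sqrt(3), i.e. a Poisson solid, is used purely as an illustration); density and Q would still have to come from another model:

```python
# Sketch: build vp from a vs-only model (e.g. TNA) using a constant vp/vs.
# sqrt(3) (a Poisson solid) is an illustrative assumption, not a
# recommendation; density and Q must still be taken from elsewhere.
import math

VP_OVER_VS = math.sqrt(3.0)  # assumed constant vp/vs ratio

def vp_from_vs(vs_km_s):
    return VP_OVER_VS * vs_km_s

# A few hypothetical vs values (km/s) from a 1D profile.
for vs in (3.2, 4.5, 4.8):
    print(f"vs = {vs:.2f} km/s -> vp ~ {vp_from_vs(vs):.2f} km/s")
```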

Instead of "AK135f 1s period, 1800 s duration, vertical only" would it be possible to the same for all 3 components but 3600 s duration and only 100 km max depth? Should be around 1-2 TB.

Sounds good, let the Americans decide.

alexhutko commented 8 years ago

Our current bug-free set:
prem_ani: 20-100 s, dt 4.87 s, 1797 s duration, vertical and horizontal, 20s_PREM_ANI_FORCES for testing only, 0.6 GB (link)
iasp91: 2-100 s, dt 0.483 s, 3700 s duration, vertical and horizontal, 1.3 TB (link)

Our current buggy set:

ak135f: 2-100 s, dt 0.487 s, 3700 s duration, vertical and horizontal, 1.0 TB (link)
prem_iso: 2-100 s, dt 0.488 s, 3700 s duration, vertical and horizontal, isotropic, 1.1 TB (link)

Our wishlist:

- PREM continental crust, 2 s period, 3600 s duration
- PREM oceanic crust, 2 s period, 3600 s duration
- PREM, 5 s and 10 s resolution, 3600 s duration, for the FFM service (we're not sure about anything FFM-related yet)
- PREM, 10 s resolution, 18000 s duration, for large earthquake coda studies
- AK135f & IASP91, 2 s period, 3600 s duration
- AK135f, 1 s period, 1800 s duration, vertical only

Our priorities:

1) Release before AGU.
2) Let's start with the lower resolution ones, which will be quick to generate & upload, then redo prem_iso 2s, then 1s ak135f, then the rest.

Hopefully:
- make the resolution such that dt is slightly higher than a pretty number, i.e. 0.51 instead of 0.49, so it can be upsampled to 0.5 in instaseis.
- make sure the durations are slightly longer and not shorter than a pretty number, i.e. 1801 instead of 1797.

Simon, you mentioned that we should generate the databases. Past experience with AxiSEM has taught us that if this is the case, it'll go much faster if one of you can help us by logging in locally and setting things up for us. That way you can debug and add any dependencies that are needed rather than playing email tag for weeks. Or you can generate them in Germany and FedEx us the externals; we can pay for shipping if needed, just let me know. I think one of these is the best route given that we have only 3-4 weeks (hopefully less) before the target release. Martin has access to our dpstage; I can give anyone else access if needed.

If one of the Germans doesn’t have time to generate the models, maybe we should have a quick conference call. Please let us know.

Thanks, The Americans

krischer commented 8 years ago

Including @tnissen to make him aware of this.

-make the resolution such that dt is slightly higher than a pretty number, i.e. 0.51 instead of 0.49, so it can be upsampled to 0.5 in instaseis.

@sstaehler and @martinvandriel can try to do this ;-) Nonetheless I would propose to fix the minimum sampling rate/maximum dt to something like 5 times the mesh sampling rate. If people request velocity/acceleration data at the low sampling rate, the time-domain derivatives we have act as a strong low-pass filter. As they are applied after the resampling, this is no longer a problem.
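For context, a minimal sketch of how the resampled output would be requested from an Instaseis database; the database path, source and receiver values are placeholders, and dt/kernelwidth are the knobs discussed here (as I understand it, dt=0.5 only works if the native database dt is not smaller than 0.5, hence the wish for a native dt slightly above the pretty number):

```python
# Sketch: extract a seismogram from a local Instaseis database, resampled
# with a Lanczos kernel to a "pretty" dt. Database path, source and receiver
# below are placeholders, not values from this thread.
import instaseis

db = instaseis.open_db("/path/to/prem_iso_2s_db")  # placeholder path

src = instaseis.Source(
    latitude=10.0, longitude=20.0, depth_in_m=20e3,
    m_rr=1e17, m_tt=1e17, m_pp=1e17, m_rt=0.0, m_rp=0.0, m_tp=0.0)
rec = instaseis.Receiver(latitude=40.0, longitude=-110.0,
                         network="IU", station="ANMO")

st = db.get_seismograms(source=src, receiver=rec, components="Z",
                        kind="velocity", dt=0.5, kernelwidth=12)
print(st)
```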

Regarding a continental & oceanic crust: for the solid parts I think Gabi's averages from Crust 2.0 probably work well. The fluid layer is something that must be decided on your side of the Atlantic. Water layer on top of oceanic crust above and 'near' the source + receiver on something solid (continental?) = beyond our AxiSEM comprehension.

AxiSEM does not yet have water layers/islands so this cannot be done until AGU.

In any case, here is a suggestion for a continental PREM, anyone is free to offer alternatives. http://igppweb.ucsd.edu/~gabi/crust/crust2-averages.txt

This will be tough. They have very thin (down to 80m) ice/water layers which cannot be modelled in AxiSEM.

sstaehler commented 8 years ago

In any case, here is a suggestion for a continental PREM, anyone is free to offer alternatives. http://igppweb.ucsd.edu/~gabi/crust/crust2-averages.txt

This will be tough. They have very thin (down to 80m) ice/water layers which cannot be modelled in AxiSEM.

I see three potential problems with Gabi's averages:

  1. (minor) The mantle they connect to (vp = 8.15 km/s...) is not a PREM mantle. Evaluating the PREM polynomial at the depth of the boundary gives lower velocities (8.11 km/s). Anyway, we could just connect it to the PREM polynomial mantle (see the sketch after this list).
  2. (intermediate) The water layer has to be removed and replaced by another one (no fluid layer on top in AxiSEM). I would also remove the "ice" layer, which was not a global feature last time I looked out of the window and will be generally obsolete in a few years. So start with the sediment layers.
  3. (major) The 3 following layers (soft sediments, hard sediments, upper crust) are very thin (down to 260 m), which enforces a very small time step for the simulation. I would therefore make them thicker, at least one kilometer.
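The sketch referred to in point 1: evaluating the PREM LID/LVZ velocity polynomials at the Moho radius. The coefficients are quoted from memory from Dziewonski & Anderson (1981) and should be verified against the original table before any real use:

```python
# Sketch: evaluate the anisotropic PREM velocity polynomials at the Moho
# radius (6346.6 km). Coefficients quoted from memory from Dziewonski &
# Anderson (1981) for 6151 km <= r <= 6346.6 km -- verify before use.
R_EARTH = 6371.0  # km

def prem_lid_lvz_vp(r_km):
    x = r_km / R_EARTH  # normalized radius
    vpv = 0.8317 + 7.2180 * x
    vph = 3.5908 + 4.6172 * x
    return vpv, vph

vpv, vph = prem_lid_lvz_vp(6346.6)
print(f"vpv = {vpv:.2f} km/s, vph = {vph:.2f} km/s, "
      f"plain mean ~ {(vpv + vph) / 2:.2f} km/s")  # roughly the 8.11 km/s quoted above
```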

In the end, we are creating new models. I know close to nothing about surface waves, but I would expect them to be very sensitive to these layers.

Here are two proposals. They are tough to mesh due to the thin layers with low velocities, but it works with a bit of Spucke (spit).

[Figures: cont, ocean_comparison]

krischer commented 8 years ago

I would also remove the "ice" layer, which was not a global feature last time I looked out of the window and will be generally obsolete in a few years.

Haha :-) Great way to future proof syngine!

martinvandriel commented 8 years ago

I don't think we should be inventing new models. Anyway, these averages seem too detailed to me to make sense on a global scale.

sstaehler commented 8 years ago

I don't think we should be inventing new models. Anyway, these averages seem too detailed to me to make sense on a global scale.

I kind of agree and we had the same discussion in the AxiSEM group one year ago. @CTrabant and @alexhutko should decide.

sstaehler commented 8 years ago

Well, at least it would be possible to mesh that (PREM on the left, oceanic crust on the right):

[Figure: prem_vs_crust20]

In total, 10% more elements than native PREM and a 40% smaller time step, so roughly 50% more expensive to create the database. From my side, we can do that, but I do not know how useful it is.

martinvandriel commented 8 years ago

Which model is this now?

sstaehler commented 8 years ago

Which model is this now?

Left: prem_ani; right: prem_crust20_ocean, which is the CRUST2.0 ocean average without the water and ice layers, with soft sediments extended to the surface and hard sediments one km thicker; the velocities are plotted above. I created a branch in the AxiSEM repo with these models: geodynamics/axisem@a085bc6eaf7c2d8e9dd91ae96011e2faced4a801

tnissen commented 8 years ago

We had this discussion indeed, but no consensus... I certainly disagree: any time we discretize and take a band-passed version of a published model, we basically "invent new models"... Even if they're offered in a parameterized fashion, they don't necessarily satisfy data in our parameter regime (e.g. PREM hasn't been constructed using 1 Hz body waves).

The point is the following:

We want to approximate data with averaged models. Therefore, we want to offer a number of diverse models which can do that for a diverse set of source-receiver paths. We know for a fact that all oceanic paths are not well approximated by our existing non-oceanic models. Therefore I'd say we offer our best effort to approximate an oceanic 1D model, which could be one such as Gabi's without the miniature layers. Everyone in a discrete world does that (including all of SPECFEM's crustal implementations, Fichtner's effective crust, and 3D models in particular), and I believe (as much as Kennett, Nolet, Ritsema and numerous other folks) that a diverse range of 1D models (in particular for the crust and lithosphere) is more useful than a narrow range.

If we do not have anything to offer for oceanic paths, it'll be the immediate criticism (and rightly so). Everyone knows 1D models are approximations no matter what, but if we leave out the majority of global path coverage, and by far the most significant waveform differences amongst our 7 models, and instead focus on second-order effects such as attenuation/anisotropy and slightly differing mantle profiles, then I'd understand people raising their eyebrows. Yes, everything is a compromise, but I think it's more sensible to approximate first-order effects than to entirely neglect them in order to focus on second-order effects. 10-20 s global surface waves will not sense much from layers of 100 m thickness, but they will see huge differences between an 8 km and a 30 km crust...

I like Simon's saliva profiles. As long as they're clearly labeled as an approximation to continental and oceanic CRUST2.0, I don't see why this should be an issue. In the end it's nothing but a service, and offering colleagues the breadth of such models will not, by definition, limit the scope of the service. We can't do more than we're able to do, but we also shouldn't do less. Do we expect people to read/understand what 1D model they use? Yes. If they don't like our best-effort patchwork oceanic profile, they don't have to use it. Some will find use for this though, and that is the point... even if some of us have personal objections to using such models... it's about offering a service to enable new research for an (educated) community.


Tarje


sstaehler commented 8 years ago

@tnissen: Yes, true, I'd also like to have an oceanic crust model. The only point is that I am not educated enough to create one. Do you want to contact Jeroen R. to see whether he has an opinion on a good average oceanic crust model?

alexhutko commented 8 years ago

We echo Tarje's sentiments regarding models.

Also, I'll add that I think there are enough features (e.g. custom STF, FFM) that will likely work their way into syngine that a second announcement will be made at a later date. We can highlight any new or higher-resolution versions of models at that time, as well as on the main product page in the models section.

We appreciate Simon's efforts currently underway here at the DMC. The Americans

sstaehler commented 8 years ago

Okay, the DPStage machine is ready for production runs now and I'll start.

-make the resolution such that dt is slightly higher than a pretty number, i.e. 0.51 instead of 0.49, so it can be upsampled to 0.5 in instaseis.

@sstaehler and @martinvandriel can try to do this ;-) Nonetheless I would propose to fix the minimum sampling rate/maximum dt to something like 5 times the mesh sampling rate. If people request velocity/acceleration data at the low sampling rate, the time-domain derivatives we have act as a strong low-pass filter. As they are applied after the resampling, this is no longer a problem.

@alexhutko: Is this still an issue? It's a bit difficult to set the sampling rate to an explicit value, so I'd avoid it if not necessary.

alexhutko commented 8 years ago

Is it painful to make the database something like 1.97 s (4.9 s) rather than 2.0 s (5.0 s) resolution, which would likely push the resulting dt from 0.48? to >0.5 s? We would still advertise this as a 2 s model.

sstaehler commented 8 years ago

Is it painful to make the database something like 1.97 s (4.9 s) rather than 2.0 s (5.0 s) resolution, which would likely push the resulting dt from 0.48? to >0.5 s? We would still advertise this as a 2 s model.

I'm confused: Do you want a larger dt (by increasing the mesh period from 2s to 2.1s)? Or a smaller dt (by reducing the mesh period from 2s to 1.97s)?

alexhutko commented 8 years ago

Yes, you are correct. Doing something like 2.1s resolution so that the final traces can be upsampled from 0.51 s to 0.5s within instaseis.

Sorry for the confusion.

sstaehler commented 8 years ago


Okay, I'll add 5% to all periods.
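A trivial sketch of that adjustment, assuming the mesh period is the knob that controls the resulting database dt (the exact dt also depends on the mesher, so the 5% is only a safety margin):

```python
# Sketch: bump each target mesh period by 5% so the resulting database dt
# lands slightly above a "pretty" value (e.g. 0.51 s instead of 0.49 s) and
# can then be upsampled to the pretty value (0.5 s) in Instaseis.
target_periods_s = [1.0, 2.0, 5.0, 10.0]  # advertised periods
for period in target_periods_s:
    print(f"advertised {period:>4.1f} s -> mesh period {1.05 * period:.2f} s")
```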

tnissen commented 8 years ago

I did talk to Ritsema (a while back); he only commented on PREM in the oceans... in that the crust should be OK with a 10 km oceanic crust layer, but the S-wave speed in the asthenosphere is a bit slower in the oceans than in PREM, and PREM is quite a bit off in the upper mantle for P in the oceans.

I would cautiously propose a PREM model with an oceanic 10 km crust as our first, better-than-nothing approximation to an ocean.

For the longer run, I'm warming up to the idea of reinverting .... but I think that's part of the REM efforts around Ved Lekic at Maryland. I'll contact him too.


Tarje


alexhutko commented 8 years ago

Hi Simon,

Thanks for prem_ani_5s and prem_ani_10s.

As for what models should be run at the DMC next, maybe it's best to just run the prem ocean & continental models here at a low resolution (5s?) so we have a variety for release. Afterwards, 2s versions can cook in the background and eventually replace the 5s versions. Does this sound OK if it's not too much work for you?

For the models you just made, regarding dt: those are fine

Regarding nsamples: can you please make the models have duration = intended duration + source half-shift + N, so that for an "18000 s" duration model a syngine request for 18000 s duration is valid? Thinking about the future possibility of including finite-width STFs or FFMs, would this influence N? I'd like to update the advertised durations and err on the safe side. http://ds.iris.edu/ds/products/syngine/#models

valid: http://service.iris.edu/irisws/syngine/1/query?network=IU&station=ANMO&components=Z&eventid=GCMT:C201005241618A&model=prem_ani_10s&label=case1&endtime=17982

invalid: http://service.iris.edu/irisws/syngine/1/query?network=IU&station=ANMO&components=Z&eventid=GCMT:C201005241618A&model=prem_ani_10s&label=case2&endtime=17983

[Figure: duration]
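For illustration, the same check done programmatically; this assumes the third-party requests package, with the parameters copied from the URLs above:

```python
# Sketch: reproduce the valid/invalid endtime check against the syngine
# service. Assumes the third-party "requests" package; the query parameters
# are copied from the example URLs in this comment.
import requests

BASE = "http://service.iris.edu/irisws/syngine/1/query"
common = {
    "network": "IU", "station": "ANMO", "components": "Z",
    "eventid": "GCMT:C201005241618A", "model": "prem_ani_10s",
}

for label, endtime in [("case1", 17982), ("case2", 17983)]:
    params = dict(common, label=label, endtime=endtime)
    response = requests.get(BASE, params=params)
    # A requested endtime beyond the database length should return an HTTP error.
    print(label, endtime, "->", response.status_code)
```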

krischer commented 8 years ago

A comment: The maximum possible length of a seismogram measured from the peak of the sliprate is:

seismogram_length - source_shift - (kernel_width * dt)

A reasonable maximum kernel_width is something around 30 I guess.

regarding nsamples: can you please make models have duration = intended duration + source halfshift + N so that for a "18000s" duration model, a syngine request for 18000s duration is valid? Thinking about the future possibility of including finite width STFs or FFMs, would this influence N? I'd like to update the advertised durations and err on the safe side.

It does not influence FFM models, as it internally shifts the first source to time 0 and then the same logic as for normal seismograms applies.

For custom STFs you cannot guarantee it in any case as people might just choose to set the origin of their seismograms to the middle of their STF. For the planned parameterized STF I think adding a buffer of two minutes should be sufficient. The largest half-duration in the GCMT database I could find is 95 seconds.

Summarizing I think the following would work for all cases and strongly errs on the safe side:

nsamples = desired_length_in_samples + 30 * dt + source_shift_samples + 120 / dt
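A sketch of that sizing rule, read so that every term is in samples (the resampling kernel contributes kernel_width samples and the 120 s STF buffer contributes 120/dt samples); the concrete numbers are placeholders, not values from this thread:

```python
# Sketch: number of samples a database needs so that a request for the full
# advertised duration stays valid, following the rule above with every term
# expressed in samples. The example numbers below are placeholders.
import math

def required_nsamples(desired_length_s, dt_s, source_shift_s,
                      kernel_width=30, stf_buffer_s=120.0):
    desired = math.ceil(desired_length_s / dt_s)
    source_shift = math.ceil(source_shift_s / dt_s)
    stf_buffer = math.ceil(stf_buffer_s / dt_s)
    return desired + kernel_width + source_shift + stf_buffer

# Placeholder example: an "18000 s" model with dt ~ 2.0 s and a 20 s source shift.
print(required_nsamples(desired_length_s=18000.0, dt_s=2.0, source_shift_s=20.0))
```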

chad-earthscope commented 8 years ago

Syngine is released.