ut-beg-texnet / NonLinLoc

Probabilistic, Non-Linear, Global-Search Earthquake Location in 3D Media
http://www.alomax.net/nlloc/docs
GNU General Public License v3.0

drawbacks of large size region #45

Open sebmagia opened 5 months ago

sebmagia commented 5 months ago

I am using NonLinLoc to locate earthquake hypocenters in the East Anatolian Fault. The region is quite big, spanning 500 km in the easting direction, 370 km in the northing direction, and 60 km in depth. For my case study in particular, I am interested in reducing the depth uncertainty of the earthquakes. I read in similar issues here on GitHub that the error ellipses may become very elongated, especially in the z-direction. Is there a way to reduce such elongation?

Also, I'm using a LAMBERT projection for the region, do you recommend another one? I was thinking AZIMUTHAL EQUIDIST might work too, but the region is perhaps too big.

TRANS LAMBERT WGS-84 35.90 34.9 36.5 38.75 0.0

I'm using these parameters for LOCGRID and LOCSEARCH.

LOCSEARCH OCT 20 20 30 0.0001 50000 5000 0 1
LOCGRID 2000 1480 250 0.0 0.0 -2.2 0.25 0.25 0.25 PROB_DENSITY SAVE
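
For reference, a quick sanity check that this grid covers the region (treating the extent per axis as roughly num_cells x cell_size):

```python
# Search-volume extent implied by the LOCGRID line above
# (roughly num_cells * cell_size per axis).
nx, ny, nz = 2000, 1480, 250        # LOCGRID cell counts
dx = dy = dz = 0.25                 # cell size, km
print(nx * dx, ny * dy, nz * dz)    # -> 500.0 370.0 62.5 (km)
```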

Perhaps I could also change the LOCGAU and LOCGAU2 parameters a bit, but honestly I don't have prior knowledge of which values I should set.

LOCGAU 0.05 0.0
LOCGAU2 0.02 0.02 1.0

Best regards

alomax commented 5 months ago

Hello,

Interesting and important questions!

Is there a way to reduce such elongation?

The large depth error and uncertainty, relative to epicenter uncertainty, in arrival-time location is mainly due to the stations being on the surface and not all around the source events; e.g., we do not have stations deep in the earth below the events. There is in general no practical way to reduce this problem.
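
A toy illustration of this geometry (a sketch for a homogeneous half-space, not NLL code; the velocity is an assumed constant):

```python
import math

# Sensitivity of travel time to source depth for a surface station in a
# homogeneous half-space: t = sqrt(d^2 + z^2) / v, so
# dt/dz = z / (v * sqrt(d^2 + z^2)), which becomes very small once d >> z.
def dtdz(d_km, z_km, v_kms=6.0):
    return z_km / (v_kms * math.hypot(d_km, z_km))

# A pick at 5 km epicentral distance constrains a 10 km deep source much
# better than one at 100 km:
print(dtdz(5.0, 10.0))    # ~0.149 s/km
print(dtdz(100.0, 10.0))  # ~0.017 s/km
```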

Secondarily, depth error is related to velocity model error, including error in Vp/Vs. This source of error may or may not show up in the depth uncertainty. The general solution is to have good P and S readings from stations over the events (e.g. with epicentral distance < depth) and very accurate P and S velocity models. Not easy, but possible in some cases, e.g. where there are deep well logs, and/or the geology is not too complex and Vp and Vs only increase with depth, and/or with a good 3D model or station corrections (e.g. SSST corrections). In general it is possible to reduce relative error in depth (e.g. with SSST or differential relocation), but absolute depth usually remains biased.

There are also other sources of error, such as picking error. The often good azimuthal and distance coverage of stations means all these sources of error are reduced for the epicenter.

See this study and references therein: Husen, S., & Hardebeck, J. (2010). Earthquake location accuracy. Community Online Resource for Statistical Seismicity Analysis. https://doi.org/10.5078/CORSSA-55815573

Also, I'm using a LAMBERT projection for the region, do you recommend another one? I was thinking AZIMUTHAL EQUIDIST might work too, but the region is perhaps too big.

LAMBERT is a good choice since the seismicity is likely distributed throughout the study area. AZIMUTHAL EQUIDIST is optimal when the target seismicity is concentrated in a small area relative to the extent of the station coverage.

I think that in theory 300-500 km is the maximum for a flat-earth study geometry. More important is that the study projection is the same as, or as close as possible to, the projection of the model used for generating the travel-time tables, so that travel-times and distances for the relocations and in the model are consistent.
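
As a rough way to check the flat-earth distortion over a region this size, one can compare map distances in a Lambert conformal conic projection with WGS-84 geodesic distances. A sketch, assuming pyproj is available; the lcc parameters mirror your TRANS LAMBERT line, and the two test coordinates are illustrative only:

```python
import math

from pyproj import Geod, Proj

# Lambert conformal conic mirroring TRANS LAMBERT WGS-84 35.90 34.9 36.5 38.75:
# lat_0/lon_0 = origin, lat_1/lat_2 = standard parallels.
lcc = Proj(proj="lcc", lat_0=35.90, lon_0=34.9,
           lat_1=36.5, lat_2=38.75, ellps="WGS84")
geod = Geod(ellps="WGS84")

# Two points roughly 500 km apart across the study region.
lon1, lat1, lon2, lat2 = 35.0, 36.5, 40.0, 38.5

x1, y1 = lcc(lon1, lat1)            # map coordinates in meters
x2, y2 = lcc(lon2, lat2)
map_km = math.hypot(x2 - x1, y2 - y1) / 1e3

_, _, dist_m = geod.inv(lon1, lat1, lon2, lat2)
print(f"map: {map_km:.2f} km, geodesic: {dist_m / 1e3:.2f} km")
```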

I'm using these parameters for LOCGRID and LOCSEARCH. ...

I have run NLL relocations for the East Anatolian Fault area: https://zenodo.org/records/8089273. Here is my basic NLL control file for this study: Turkey_2023_Acarel2019smooth.in.txt

You can see settings there that may answer your questions and help with other settings. Note that I always try to set the LOCSEARCH OCT initial grid size in x, y, z in proportion to the LOCGRID size in x, y, z, so that the octree search uses cubic cells.
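
As a rough illustration of this rule (a sketch; the extent per axis is taken as num_cells x cell_size from your LOCGRID line):

```python
# Pick LOCSEARCH OCT initial cell counts so the initial octree cells are
# near-cubic within the LOCGRID search volume.
def cubic_init_cells(x_km, y_km, z_km, target_cells=12000):
    # Common cell edge s such that (x/s) * (y/s) * (z/s) ~= target_cells.
    s = (x_km * y_km * z_km / target_cells) ** (1.0 / 3.0)
    return round(x_km / s), round(y_km / s), round(z_km / s)

# Your LOCGRID spans 2000*0.25 x 1480*0.25 x 250*0.25 = 500 x 370 x 62.5 km:
print(cubic_init_cells(500.0, 370.0, 62.5))  # -> (51, 37, 6)
```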

Perhaps I could also change a bit the LOCGAU and LOCGAU2

These settings are somewhat arbitrary, as they are for Gaussian, aleatoric-type error models of travel-time uncertainty, while the velocity model and resulting travel-time errors are probably far from Gaussian. For example, false, sharp interfaces in the model, or missing 3D structures or lateral discontinuities in the true earth can give a large, epistemic-type error.

For both the assigned pick uncertainty and the LOCGAU* uncertainty settings, one indication that they are too small is if the NLL location PDFs for many events are very complex, have multiple maxima, or split into separate clusters.

LOCGAU2 SigmaTfraction sets a percentage-of-travel-time error; I usually use around 0.02-0.05 (2-5%). This parameter adjusts automatically for P and S, unlike the other parameters and LOCGAU, which is a shortcoming of the current NLL implementation. SigmaTmin might be set similar to the typical residual of picks used for location with high weight (non-outlier picks); SigmaTmax might be set larger than the largest expected travel-time error (which is likely difficult to determine...)
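
As I understand the LOCGAU2 model, the effective time uncertainty is SigmaTfraction times the travel time, clipped to the [SigmaTmin, SigmaTmax] range; a minimal sketch (the example values are illustrative):

```python
# LOCGAU2-style travel-time-dependent uncertainty:
# sigma_t = SigmaTfraction * travel_time, clipped to [SigmaTmin, SigmaTmax].
def locgau2_sigma(tt_s, fraction=0.03, sigma_min=0.05, sigma_max=1.0):
    return min(max(fraction * tt_s, sigma_min), sigma_max)

for tt in (1.0, 10.0, 60.0):
    print(tt, locgau2_sigma(tt))  # 0.05 (floor), 0.3, 1.0 (ceiling)
```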

I hope this helps!

Best regards,

Anthony

sebmagia commented 5 months ago

Dear Anthony,

Thank you very much for your response. With your advice and control file, I tried a few configurations and I am quite happy with the results, but there are some features of my earthquake locations that I wish to improve.

I was plotting some seismicity cross-sections and it is clear that there are unrealistic preferred depths (e.g. roughly at 2, 3 and 4 km), as shown in the attached figure (C1C2full). Do you know what could be the source of this behavior? I tried a few different configurations but could not solve the issue.

I attach the control file from which the locations were computed:

nlloc_gazi_seba_new_ng11.in.txt

Best regards.

alomax commented 5 months ago

Hello,

Sorry for my delayed reply; I was traveling the past weeks.

I think the depth problem may be due to your using a quite coarse, 1 km grid spacing for VGGRID:

VGGRID 2 1001 81 0.0 0.0 -3.0 1.0 1.0 1.0 SLOW_LEN

while your velocity model consists of layers with 0.25 km thickness. So perhaps it would help to grid the velocity model so it corresponds exactly to the layer interface depths:

VGGRID 2 4001 321 0.0 0.0 -3.0 0.25 0.25 0.25 SLOW_LEN

or even finer, to better capture details of the wavefronts and travel-times, e.g.:

VGGRID 2 20001 1601 0.0 0.0 -3.0 0.05 0.05 0.05 SLOW_LEN
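
As a quick way to see the problem, one can check whether a candidate VGGRID spacing puts grid nodes exactly on the 0.25 km layer interfaces (a minimal sketch, not part of NLL):

```python
import math

# A spacing puts nodes on every interface only if it divides the 0.25 km
# layer thickness (ratio is a whole number >= 1).
layer_km = 0.25
for dz in (1.0, 0.25, 0.05):
    ratio = layer_km / dz
    ok = ratio >= 1 and math.isclose(ratio, round(ratio))
    print(f"dz={dz} km:", "nodes hit interfaces" if ok
          else "interfaces fall between nodes")
```

Best regards,

Anthony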

sebmagia commented 4 months ago

Dear Anthony,

Thank you. These changes solved the issue. I have one last question, just to be sure: is the LOCGRID resolution (somewhat) independent of the VGGRID resolution? For example, in the file you uploaded you used a VGGRID resolution of 0.1 and a LOCGRID resolution of 2.0. I feel that increasing the LOCGRID resolution should increase the solution precision, but it also makes the computation time longer.

Best regards,

sebmagia

alomax commented 4 months ago

Hi sebmagia,

The LOCGRID cell size is a carry-over from the first version of NLLoc, which only had a nested grid search (I never use this now, though it may be useful for detailed study of the location confidence distribution). When using the octree search (recommended) or Metropolis (I never use this), the cell size, along with the number of cells in x, y, z, only serves to define the total extent of the location search volume.

The LOCSEARCH OCT parameters, however, are important for setting the cell aspect ratio (which should be near cubic, in general) and the number of initial cells in x, y, z (e.g. with max_num_nodes = 50000, I try to set init_num_cells_x, init_num_cells_y, init_num_cells_z to give around 10-20k initial cells). The setting of these parameters depends on the LOCGRID search extent settings.
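
To make this concrete with the numbers from the original question (a sketch; the 500 x 370 x 62.5 km extent follows from the LOCGRID line quoted earlier):

```python
# Initial octree cell sizes (km) for given LOCSEARCH OCT initial cell counts
# and a LOCGRID extent of 500 x 370 x 62.5 km.
def cell_sizes(extent_km, init_cells):
    return tuple(round(e / n, 2) for e, n in zip(extent_km, init_cells))

extent = (500.0, 370.0, 62.5)
print(cell_sizes(extent, (20, 20, 30)))  # (25.0, 18.5, 2.08): 12000 cells, far from cubic
print(cell_sizes(extent, (51, 37, 6)))   # (9.8, 10.0, 10.42): ~11.3k cells, near-cubic
```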

Anthony