ut-beg-texnet / NonLinLoc

Probabilistic, Non-Linear, Global-Search Earthquake Location in 3D Media
http://www.alomax.net/nlloc/docs
GNU General Public License v3.0

Gridding at depth #52

Open OliviaPS opened 3 months ago

OliviaPS commented 3 months ago

Hi @alomax,

I am using NLL to locate events over a large region in New Zealand, with a NZ 3D model (https://zenodo.org/records/6568301) that has layers of different thickness. I am getting "gridding" at depth that I think matches the 1 km grid spacing from VGGRID (I am only plotting the first 50 km). I have tried changing the cells and max_num_nodes in LOCSEARCH, but I am still getting those lines.

Do you have any advice on how to avoid these lines?

Best regards

Olivia

[figure: gridding at depth] (attachment: NonLinLoc_3d_AF.txt)

alomax commented 3 months ago

Hello Olivia,

Commonly, gridding in the hypocenters reflects some trade-off between 1) sharp changes in the velocity or velocity gradient at grid boundaries (which lead to higher-order discontinuities in travel time that perturb and/or attract hypocenters), 2) poorly constrained events (so the effective grid search is very coarse), 3) too-large pick or travel-time uncertainties (which also give a coarse effective grid search), and 4) coarse or erroneous grid-search settings (the octree search is highly recommended; I only use octree).

(1) can be addressed by using as smooth a model as possible and/or finer model/travel-time gridding. (2) requires a good distribution of stations with precise and accurate P and S picks and uncertainties. (3) requires careful and realistic setting of pick uncertainties for the observations, and of travel-time uncertainty (e.g. through LOCGAU2). (4) requires, for the octree LOCSEARCH OCT, that the initial grid is not too coarse (I usually set init_num_cells_x, init_num_cells_y, init_num_cells_z to be proportional to the respective LOCGRID dimensions, so that the product init_num_cells_x*init_num_cells_y*init_num_cells_z is about 1/4 to 1/3 of max_num_nodes); that max_num_nodes is high (I typically use 50000); and that min_node_size is less than, or much less than, the final location precision of interest.
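
To make that bookkeeping concrete, here is a minimal Python sketch (my own helper, not part of NonLinLoc; the choice of one initial cell per 10 grid nodes is only an illustrative assumption) that scales the initial cell counts with the LOCGRID node counts and reports their product relative to max_num_nodes:

```python
# Hypothetical helper (not part of NonLinLoc): apply the rule of thumb above.
def suggest_oct_init(nx, ny, nz, max_num_nodes, nodes_per_cell=10):
    """Initial octree cell counts proportional to the LOCGRID node counts
    (here one initial cell per `nodes_per_cell` grid nodes on each axis)."""
    init = (max(nx // nodes_per_cell, 1),
            max(ny // nodes_per_cell, 1),
            max(nz // nodes_per_cell, 1))
    product = init[0] * init[1] * init[2]
    # Aim for product ~ 1/4 to 1/3 of max_num_nodes (see the text above).
    return init, product, product / max_num_nodes

# LOCGRID in this thread: 600 x 800 x 150 nodes, max_num_nodes = 200000
print(suggest_oct_init(600, 800, 150, 200000))
# -> ((60, 80, 15), 72000, 0.36)
```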

How well the data constrain the optimal hypocenter can be assessed through indicators like errH, errZ, the ellipsoid major-axis half-length (Len3), and the difference between the maximum-likelihood and expectation hypocenters, or better, by visualizing the pdf scatter cloud, the ellipsoid, and the maximum-likelihood vs expectation hypocenters.

If gridding is still visible in the final locations after addressing the possible causes above, then the gridding can be "hidden" by plotting the expectation hypocenters instead of the maximum-likelihood ones. This sometimes gives a more informative distribution of hypocenters and prevents overlapping plotting of multiple hypocenters.
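
If it helps for plotting, below is a rough sketch of pulling both hypocenters out of a .hyp file; it assumes the usual "HYPOCENTER x ... y ... z ..." and "STATISTICS ExpectX ... Y ... Z ..." line layouts (check against your own output files), and the file name is just a placeholder:

```python
# Sketch (not an official NonLinLoc tool): read the maximum-likelihood and
# expectation hypocenters from a .hyp file, e.g. to plot the expectation points.
def read_ml_and_expectation(hyp_path):
    ml = expect = None
    with open(hyp_path) as f:
        for line in f:
            tok = line.split()
            if not tok:
                continue
            if tok[0] == "HYPOCENTER":
                # "HYPOCENTER x <X> y <Y> z <Z> OT <t> ..." (maximum-likelihood point)
                vals = dict(zip(tok[1::2], tok[2::2]))
                ml = (float(vals["x"]), float(vals["y"]), float(vals["z"]))
            elif tok[0] == "STATISTICS":
                # "STATISTICS ExpectX <X> Y <Y> Z <Z> CovXX ..." (pdf expectation)
                expect = (float(tok[2]), float(tok[4]), float(tok[6]))
    return ml, expect

# placeholder file name
ml_xyz, expect_xyz = read_ml_and_expectation("loc/example.grid0.loc.hyp")
print("maximum likelihood:", ml_xyz, "expectation:", expect_xyz)
```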

There may, of course, be other reasons for gridding. You can supply more details of your configuration if the above does not help.

Best regards, Anthony

OliviaPS commented 3 months ago

Hi Anthony,

Thank you for your answer!

I think 1) and 2) could be the cause of the gridding. My stations are mostly in a line, so I have poor azimuthal coverage. For the uncertainties, I am using the same uncertainty for all my picks. The picks were obtained automatically using EQTransformer, so I do not have a way of individually adding weights or uncertainties. I have been using the standard deviation from a comparison I did between EQTransformer picks and manual picks in the study area. Is there something you can recommend when using picks that do not have uncertainties?

If I use the expected hypocenters, do I need to recalculate lat/lon/depth and origin errors?

I am attaching my configuration file as a .txt and here are some of the values I am using: NonLinLoc_3d_AF.txt

VGGRID 600 800 150 -300 -100.0 -4.0 1.0 1.0 1.0 SLOW_LEN
LOCSEARCH OCT 50 50 30 0.001 200000 1000 1 0
LOCGRID 600 800 150 -300 -100.0 -4.0 1. 1. 1. PROB_DENSITY SAVE
LOCMETH EDT_OT_WT 9999.0 4 -1 -1 -1 -1 -1.0 0
LOCGAU 0.2 10.0
LOCGAU2 0.01 0.05 0.5

[figure: station map]

Thank you!

Best regards,

Olivia

alomax commented 3 months ago

Hello Olivia,

Is there something you can recommend when using picks that do not have uncertainties?

The std dev of EQTransformer vs manual picks may be a good estimate if there is no evidence of a time bias between the two picking methods, for example if the ML picks are generally delayed with respect to the manual picks.
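
As an illustration (the array values are made up), a quick bias check on matched EQTransformer/manual picks could look like:

```python
import numpy as np

# Made-up matched pick times (s) for illustration; one entry per common pick.
eqt_picks = np.array([10.12, 25.40, 33.95])     # EQTransformer picks
manual_picks = np.array([10.10, 25.44, 33.90])  # manual reference picks

diff = eqt_picks - manual_picks
print(f"bias (mean) = {diff.mean():+.3f} s, scatter (std) = {diff.std(ddof=1):.3f} s")
# Use the std dev as the pick uncertainty only if the bias is ~0; otherwise the
# systematic shift should be removed (or folded into the uncertainty) first.
```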

What pick uncertainties are you using? For 100 Hz recordings and high signal-to-noise, P uncertainty can be as low as 0.02 sec.

I often take a look at the residuals for some events and set the uncertainties to ≥ a typical residual value. Usually I have S uncertainty ~2x P uncertainty.
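
For example, something along these lines (a sketch assuming ObsPy's NLLOC_HYP reader populates arrival.time_residual; the file pattern is a placeholder) can summarize the absolute residuals per phase:

```python
import glob
import numpy as np
from obspy import read_events

residuals = {"P": [], "S": []}
for hyp_file in glob.glob("loc/*.hyp"):          # placeholder path pattern
    for event in read_events(hyp_file, format="NLLOC_HYP"):
        for arrival in event.origins[0].arrivals:
            if arrival.phase and arrival.time_residual is not None:
                residuals.setdefault(arrival.phase[0], []).append(arrival.time_residual)

for phase in ("P", "S"):
    r = np.abs(residuals[phase])
    if len(r):
        print(f"{phase}: median |res| = {np.median(r):.3f} s, "
              f"90th percentile = {np.percentile(r, 90):.3f} s")
# Then choose pick uncertainties >= a typical |residual|, with S roughly 2x P.
```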

I am attaching my configuration file as a .txt and here are some of the values I am using

The settings look fine. But given the dimensions of your LOCGRID I would set LOCSEARCH OCT 60 80 15 0.001 200000 1000 1 0, so that the octree search cells are near-cubic. This will likely have little or no effect on the locations. 200000 samples is high; if the locations take a long time, you can probably use 100000 with the above octree initialization (60*80*15 = 72000).
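
As a quick sanity check (illustrative arithmetic only), the initial cell sizes implied by your LOCGRID and the suggested LOCSEARCH OCT 60 80 15 work out as:

```python
# LOCGRID: 600 x 800 x 150 nodes at 1 km spacing; suggested LOCSEARCH OCT 60 80 15.
num_nodes = (600, 800, 150)
spacing_km = 1.0
init_cells = (60, 80, 15)

# Grid extent taken as (n - 1) * spacing; initial cell size = extent / cells.
cell_km = [((n - 1) * spacing_km) / c for n, c in zip(num_nodes, init_cells)]
print("initial octree cell size (km):", [round(s, 2) for s in cell_km])  # ~[9.98, 9.99, 9.93]
print("initial cells:", 60 * 80 * 15)                                    # 72000
```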

My stations are mostly in a line

You might look at the location uncertainties and check whether there is gridding for locations with good station azimuth coverage (e.g. within the wider group of red dots). This would be a test of whether the station distribution in some areas is part of the problem.

If I use the expected hypocenters, do I need to recalculate lat/lon/depth and origin errors?

No, the error statistics are generated from the pdf, so they always refer to the expectation hypocenter. However, the HYPOCENTER and GEOGRAPHIC values and the take-off angles refer by default to the maximum-likelihood hypocenter. To force these to use the expectation hypocenter, use LOCHYPOUT ... SAVE_NLLOC_EXPECTATION ...

I hope this helps some more...

Best regards,

Anthony