opengeospatial / CRS-Deformation-Models

CRS Domain Working Group Deformation Models project

How should a physical discontinuity in the deformation field be represented in the functional model? #29

Closed ccrook closed 2 years ago

ccrook commented 3 years ago

In the discussion of discontinuities at the edge of spatial models #25 during the meeting of 25 Jan 2021 it became clear that there is also a question about discontinuity in the deformation field itself, eg surface faulting. This is a different question from discontinuity that is an artefact of the model production (eg the boundary of nested grids).

During the meeting it was suggested that:

This issue is to raise the question of how this should be represented in the functional model


ccrook commented 3 years ago

Note that the HTDP model #28 includes fault plane models that intersect the surface (ie top of the fault is at depth 0), so this point motion model does include a discontinuity in the calculated deformation field.

ccrook commented 3 years ago

- JF: some form of flag to indicate issues with the deformation field in an area, eg a flag on nodes (numeric or boolean depending on the capabilities of the transport); software could assess the number of affected nodes at the interpolation point.
- KK: separate models for each side of the fault, with a no-man's land between.
- CC/JF: issue at the ends? Could use NaN on nodes adjacent to the fault.
- CC: NaN may be undesirable for users where the transformation fails in the affected area - what should they do with the data?

ccrook commented 3 years ago

I've added a discussion of a "quality" parameter to the strawman document. There is a discussion reflecting some of the points above and also raising two other issues.

I have included two options for representing quality issues at the moment: one using a quality value at nodes, and one using spatial model metadata.

ccrook commented 2 years ago

Email from @ccrook on 18 Jan 2022 with minor edits

... in terms of getting the specification to a near-final state in a timely way this is, for me, a blocking issue. In many ways the inclusion or not of the "quality flag" at nodes feels more in the realm of research than specification at the moment, which in my mind takes it beyond the middle of this year, by when I'd like our specification to be finalised apart from editing/OGC review.

The challenge for the specification is how to explicitly define such a thing and describe how it is used in the context of a transformation. At the moment GIS software doesn't handle uncertainty in transformations, let alone more esoteric quality information. Kevin agreed that there is value for users in being able to visualise in a GIS where/when such issues occur, but I am not sure that adding this to the grid nodes is the best way of achieving that. So options I can see at the moment are:

My feeling is that with the current state of software a GIS data set defining areas of concern would be more useful to most users than flags on nodes. I could imagine a GIS dataset in a standard format (GML, GeoPackage, shapefile, ...) which contains a layer of polygon features defining the extents of the areas for concern, each with a date and description attribute.

This could be very easily visualised and compared with users' other data sets using GIS queries. While this could be embedded in the deformation model I think that would make it less accessible to users - better that it is published as a downloadable dataset by the producer which is referenced by a URL in the metadata for the deformation model.
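To make the suggestion concrete, here is a minimal sketch of what such a polygon layer could look like as GeoJSON built in Python. The attribute names (`event_date`, `description`) and coordinates are purely illustrative, not part of any specification:

```python
import json

# Hypothetical "areas of concern" layer: one polygon per disturbed area,
# each with a date and description attribute as suggested above.
areas_of_concern = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "properties": {
                "event_date": "2016-11-13",
                "description": "Surface rupture zone - model error may exceed 0.2 m",
            },
            "geometry": {
                "type": "Polygon",
                "coordinates": [[[173.0, -42.6], [173.3, -42.6],
                                 [173.3, -42.4], [173.0, -42.4],
                                 [173.0, -42.6]]],
            },
        }
    ],
}

# Serialise for publication alongside the deformation model metadata.
geojson_text = json.dumps(areas_of_concern, indent=2)
```

A producer could publish this file at a stable URL and reference it from the deformation model metadata, keeping the model itself unchanged.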

ccrook commented 2 years ago

Response from Kevin Kelly 19 Jan 2022:

This is an important issue that should be designed in such a way as to be calculable, extensible and readily incorporated into a coordinate operation or workflow. Using these criteria, I've prioritized the proffered options as follows, along with my rationale for doing so:

  1. Uncertainty at grid nodes. This is the most rigorous and desirable approach. It can be implemented as an overlay grid to a displacement grid and readily incorporated into a coordinate operation calculation or workflow. It can be applied as an error propagation calculation or operate as a numerical flag to indicate areas where displacements have unusually high uncertainty or where displacements are unknown. It is extensible in that it can be either a sparse grid with a single uncertainty value per node or a fuller, more descriptive grid with a matrix of values per node. Both the DMFM and GGXF are, in their present state of development, capable of handling either of these grid structures.

  2. Flag at grid nodes. This is the next best approach. If the flag is numeric it can be incorporated into some predefined calculation that would indicate the level of confidence (or even an uncertainty) at a grid node. Though less rigorous than real uncertainty value(s) at nodes as in approach 1, it can also be incorporated into a coordinate operation calculation or workflow. If the flag is qualitative, while still useful, its calculability potential is somewhat compromised. The flag approach could be extensible with perhaps additional interpretable values at grid nodes, though I'm not sure of the efficacy of this.

  3. Polygonal layers of areas of uncertainty. This is the least favorable approach. While it suits GIS technology and workflows and is useful for visualization, it may not fit well into GNSS RTK/RTN workflows. Moreover, a producer may not have the GIS expertise to produce such a polygon layer. This approach seems the least calculable and it is difficult to envision its extensibility. While it could be used to inform a coordinate operation, neither the current DMFM nor GGXF standards have a structure or the protocols to carry this type of dataset, unless it is restructured into a grid format.

At this stage of our DMFM and GGXF standards development the provision of uncertainty by producers is not a requirement. Since the specification of uncertainty, if it is even available, is at the discretion of the producer, we should provide a simple means for the producer to include these data that complies with current DMFM and GGXF structures and protocols. Our specifications should also strongly encourage producers to provide uncertainty data, as this may help to make it "standard operating procedure" for producers to include uncertainties - whether or not the consumers of a GGXF-delivered DMFM (e.g. GIS) have the capability to use them. The fact that these data are available may drive GIS development to add functionality to use them. The GNSS vendors would likely utilize these uncertainty data very quickly, long before the GIS vendors, and this may be another driver for such GIS development.

ccrook commented 2 years ago

Jeff Freymueller responded on 19 Jan 2022:

I think uncertainty, if provided, should be based on whatever the producer’s best estimate is, understanding that the uncertainty propagates into all cells that are adjacent to the node in question. I agree that this is best handled by producing a grid of uncertainties, which applications or users may or may not use (but should use, of course!).

I continue to think that a warning flag is an essential element that should be provided for in the model. Given that we chose to specify a grid instead of discrete fault segments, for faults that cut the surface it is inescapable that interpolation will in some cases be compromised by the discontinuity at the surface. In areas where users are likely to want to try to use the model close to the fault (for example, within an urban/suburban area), a producer can mitigate this by providing a high resolution sub-grid for the most affected area, but no matter what is done there is no escaping the discontinuity. Including a flag at least raises the possibility that users will know that there is some issue and some potential danger. Then they can forge on regardless or decide to treat the model with skepticism, as they think best.

The good news is that I think this really only arises in the coseismic case, with surface or near-surface rupture.

ElmarBrockmann commented 2 years ago

Concerning the discussed "flag", I can understand that it might be essential for detailed modelling of a fault. On the other hand, it will hardly be possible to model faults using deformation grids, even with sub-grids of denser grid nodes closer to a fault.

Therefore, I think such a flag would leave too many options open for the producer, as well as for the reading software, as to what to do if a flag is set.

ccrook commented 2 years ago

To facilitate discussion here is an example: a vertical right-lateral strike-slip fault with the top of the fault at depth 0 (ie surface rupture), modelled using the Okada (1985) elastic half-space formulae. The fault has a small change in strike just for interest. A deformation grid representing this event and the modelled horizontal displacements at the grid nodes are shown below.

[figure: deformation grid and modelled horizontal displacements at the grid nodes]

The deformation is interpolated within the grid cells using bilinear interpolation. The diagram below is a close-up of 4 grid cells. The red vectors are the interpolated displacements from the deformation model (ie bilinear interpolation from the grid nodes). The black vectors are calculated from the fault model. These are very well matched in cells which do not include the discontinuity - not so well in cells which do.

[figure: close-up of 4 grid cells comparing interpolated (red) and fault-model (black) displacement vectors]
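Bilinear interpolation across a cell, and why it breaks down at a discontinuity, can be sketched in a few lines (pure Python; the corner naming and the example vectors are illustrative only):

```python
def bilinear(v00, v10, v01, v11, x, y):
    """Bilinear interpolation of 2D displacement vectors at the four
    corners of a unit grid cell. (x, y) are local cell coordinates in
    [0, 1]; v00 is the (x=0, y=0) corner, v11 the (x=1, y=1) corner."""
    return tuple(
        (1 - x) * (1 - y) * a + x * (1 - y) * b + (1 - x) * y * c + x * y * d
        for a, b, c, d in zip(v00, v10, v01, v11)
    )

# In a cell that straddles a right-lateral fault trace the corner vectors
# point in opposite directions, so the interpolated value near the centre
# is close to zero even though the true displacement is large on both
# sides of the fault - the source of the interpolation error shown above.
left = (-1.0, 0.0)   # nodes on one side of the fault
right = (1.0, 0.0)   # nodes on the other side
print(bilinear(left, right, left, right, 0.5, 0.5))  # -> (0.0, 0.0)
```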

The error of the interpolation (ie the length of the vector difference between the red and black vectors) is contoured across the grid as below:

[figure: contoured interpolation error across the grid]

One suggestion for identifying areas affected by faulting is to have a flag that identifies affected cells. In this case, flagging cells which contain the fault trace would identify the grid cells below (the green outline is the contour of the error of interpolation from above):

[figure: grid cells flagged as containing the fault trace, with the interpolation error contour in green]

Another option is to highlight areas of concern using the uncertainty. The diagram below shows one option for doing this: making the error at all nodes within half a cell diagonal length of the fault trace much larger than at other nodes (1.1m vs 1m) - these nodes are shown in red below. In this case we are using a single value for horizontal error - a circular probable error. The interpolated error is as shown below:

[figure: interpolated error with increased uncertainty at nodes near the fault trace]

Taking a different approach, we could increase the errors at all nodes of the cells that would have been flagged (ie the nodes of the grid cells containing the fault trace). The interpolated error would then look like:

[figure: interpolated error with increased uncertainty at all nodes of the flagged cells]

Note:

Other considerations for alerting users to discontinuity:

rstanaway commented 2 years ago

@ccrook The example above describes very well the issue of discontinuities in a displacement grid caused by seismic events. Thank you for illustrating the problem so well and discussing the considerations. Complex faulting and surface rupturing are always going to be problematic for high resolution representations of displacement, especially if only grid data is supported (GGXF) to represent the displacement model. Ideally there would be a supplementary string or fault plane segment model to estimate displacements and uncertainties near displaced faults using Okada's model or similar.

If a grid model can accommodate nested grids, the fault trace and proximal displacement could be modelled by increasingly dense nested grids that could almost emulate the geometry of the surface expression of the fault.

A flag (0 or a value greater than 1) could be assigned to a single node (the NW corner?) of a grid cell that contains a displaced fault. If the value is 0, then no fault is located within the grid cell (with the exception of a nested grid) and the uncertainties are estimated as defined in the specification. If the value of the NW node is greater than 1, then that number (an uncertainty scaling factor) can be used to scale any uncertainty estimates within the grid cell that contains a displaced fault. The scale factor would indicate the largest interpolation error as illustrated above.
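The NW-corner convention described above could work roughly as follows (a sketch only; the function name and storage layout are assumptions, not part of the DMFM or GGXF specifications):

```python
def cell_uncertainty(base_uncertainty, nw_flag):
    """Scale a cell's uncertainty using the flag stored at its NW corner.

    nw_flag == 0 : no displaced fault in the cell; use the uncertainty
                   as estimated per the specification.
    nw_flag > 1  : the cell contains a displaced fault; the flag is a
                   scale factor reflecting the largest interpolation
                   error within the cell.
    """
    if nw_flag == 0:
        return base_uncertainty
    if nw_flag > 1:
        return base_uncertainty * nw_flag
    raise ValueError("flag must be 0 or a value greater than 1")

print(cell_uncertainty(0.05, 0))    # -> 0.05 (no fault in cell)
print(cell_uncertainty(0.05, 4.0))  # fault: uncertainty scaled by 4
```

For a nested grid structure the same lookup would simply be applied to the flag of whichever child cell contains the interpolation point.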

If a denser grid structure is defined within a parent cell to refine the displacement near a fault, then the flag would be transferred to whatever child grid cell has a displaced fault.

Not having a flag (and just scaling the uncertainties of all nodes around a grid cell with a fault) has the associated problem of estimating unrealistically large uncertainties in the surrounding cells that may not have displaced faults.

If the use of a flag or artificially scaled uncertainties is not adopted, the grid header could contain a metadata label that warns users of discontinuities, but that might be too vague if, for example, only 1 grid cell in the model has a fault.

rstanaway commented 2 years ago

Reversibility of grid transformations near parent and child grid boundaries

It may be a good idea to state in the specification that there may be instances where a transformation is not reversible, if a point in the source CRS (within a parent or child grid) is transformed outside the grid in the target CRS. The problem is probably more apparent if there is a very high degree of nesting of the grid structure. In most cases the transformation would be reversible, but it's not really true to say that will be the case 100% of the time.
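The concern can be illustrated with the usual iterative inverse of a gridded transformation (a sketch under simplified assumptions, not specification text): the inverse of p → p + displacement(p) is found by fixed-point iteration starting at the target point, and if an iterate falls outside the grid extent the displacement lookup fails and the inverse is undefined.

```python
def inverse_transform(target, displacement, in_grid, iterations=10):
    """Invert p -> p + displacement(p) by fixed-point iteration.
    `displacement` maps a source-CRS point to its displacement;
    `in_grid` tests whether a point lies inside the grid extent.
    Returns None if an iterate leaves the grid (inverse undefined)."""
    x, y = target
    for _ in range(iterations):
        if not in_grid((x, y)):
            return None  # iterate fell outside the grid: not reversible
        dx, dy = displacement((x, y))
        x, y = target[0] - dx, target[1] - dy
    return (x, y)

disp = lambda p: (0.3, 0.0)              # constant displacement inside the grid
in_grid = lambda p: 0.0 <= p[0] <= 1.0   # grid covers x in [0, 1]

# Source point 0.9 transforms to 1.2, outside the grid: the forward
# transformation succeeds but the inverse iteration cannot start.
print(inverse_transform((0.9, 0.0), disp, in_grid))  # -> approximately (0.6, 0.0)
print(inverse_transform((1.2, 0.0), disp, in_grid))  # -> None (not reversible)
```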

ccrook commented 2 years ago

@rstanaway I have raised your point as a new issue at #47

ccrook commented 2 years ago

Comment from Jeff Freymueller (email 14/2/2022): My summary would be:

So I strongly favor flagging the affected grid cells. Model errors could be large enough to trigger legal property disputes when they exceed the centimeter to decimeter level!

The grid cell is what needs to be flagged, not the node - but our model provides values at the nodes. If we flag each node adjacent to a cell that contains a discontinuity, then using 3 or more such nodes would be the threshold for a warning that would be relevant to most users (using only 1 or 2 such nodes means the flagged nodes are all on the same side of the discontinuity). Users concerned about the highest accuracy might want to avoid using any such nodes due to the large deformation gradients that might be present, unless they know something about the grid spacing and the specific case. But this is really up to the user in the end.
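Jeff's threshold rule reduces to a simple check on the four nodes of the cell being interpolated (a sketch; the function and argument names are illustrative):

```python
def cell_warning(node_flags):
    """Given the fault flags of the 4 nodes of the interpolation cell
    (True where the node is adjacent to a cell containing a
    discontinuity), decide whether to warn the user.

    1 or 2 flagged nodes: the flagged nodes all lie on one side of the
    discontinuity, so this cell does not straddle it.
    3 or 4 flagged nodes: the cell likely contains the discontinuity,
    so interpolation may be unreliable."""
    return sum(node_flags) >= 3

print(cell_warning([True, True, False, False]))  # -> False
print(cell_warning([True, True, True, False]))   # -> True
```

A stricter client aiming for the highest accuracy could instead warn whenever any flagged node is used, as suggested above.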

ccrook commented 2 years ago

Further email from Jeff:

The attached figure from Hreinsdóttir et al. (2006) may be useful. The two panels show a fault-normal profile of fault-parallel and vertical displacements. The solid and dashed model curves are for a simple 2D dislocation (solid) and the final 3D slip model, respectively. You could imagine sampling these at whatever spacing you like to get a sense of the amount of error that could accumulate. The total surface offset in this case was 5.9 meters, and at this location in soft sediments the better part of 1 meter of displacement was spread out over secondary structures over a zone a few hundred meters wide (this is why the two white dots failed to capture the total displacements). To keep interpolation error under control at a few cm close to the fault, you would likely need to go down to a grid spacing on the order of 1 km. This would have to be part of a nested grid (or nested set of grids), as displacements that would matter for land survey extended out maybe 200-300 km from the fault, and detectable displacements out to ~800 km. The uncertainty in the model would certainly vary, so I think including an uncertainty grid is essential. (Just that it is not enough to do only that, in my view.)

[figure: fault-normal profiles of fault-parallel and vertical displacement from Hreinsdóttir et al. (2006), with 2D dislocation (solid) and 3D slip model (dashed) curves]

Also a comment in response to suggestion of using a polygon to represent the area of disturbance

I can see the benefit of that. At some level there has to be an “it’s too complicated” exclusion zone around every earthquake, which tells people that if they really want to know, they need to go out and re-measure position. But I think people in the SF Bay area are going to want better than an exclusion zone right in the middle of the urbanized area the next time the Hayward fault has an earthquake.

ccrook commented 2 years ago

Comment from Chris Pearson at 14 Feb meeting.

Trimble supports using a polygon to define areas of disturbance.

ccrook commented 2 years ago

Summary

Requirement

Support the ability to notify the user when a transformation is potentially affected by significant unmodelled ground disturbance, in particular a discontinuous deformation, such as a fault trace, that cannot be modelled with a smooth interpolated model.

Options

Four options have been proposed to support this capability, as described below. Note that these are not mutually exclusive.

Use a large uncertainty at grid nodes in the vicinity of affected cells

Pro

Con

Use the grid node displacements

For each cell there are four displacement vectors, one at each node, used to interpolate across the cell. The difference between these is indicative of the variation of displacement across the cell, and hence of the uncertainty. A user could specify their accuracy requirement and be notified when it is exceeded.

Pro

Con
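The displacement-spread check for this option could be sketched as follows (assuming, for illustration, a single horizontal accuracy tolerance supplied by the user; names are hypothetical):

```python
from math import hypot

def cell_exceeds_tolerance(node_vectors, tolerance):
    """node_vectors: the four (east, north) displacement vectors at the
    cell corners. Returns True if the spread between any pair of corner
    vectors exceeds the user's accuracy tolerance, indicating that the
    displacement varies too much across the cell to interpolate safely."""
    return any(
        hypot(a[0] - b[0], a[1] - b[1]) > tolerance
        for i, a in enumerate(node_vectors)
        for b in node_vectors[i + 1:]
    )

# A cell straddling a fault: corner vectors differ by ~2 m.
print(cell_exceeds_tolerance([(-1, 0), (1, 0), (-1, 0), (1, 0)], 0.1))  # -> True
# A smoothly varying cell: corner vectors differ by millimetres.
print(cell_exceeds_tolerance([(0.50, 0), (0.501, 0), (0.50, 0), (0.501, 0)], 0.1))  # -> False
```

This approach needs no extra data in the model, which is its main appeal; the trade-off is that it cannot distinguish a genuine steep deformation gradient from a discontinuity.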

Add a flag identifying grid cells in which the model is unreliable

A flag value at grid nodes is used to identify specific cells affected by disturbance. There are options for implementation, eg flagging the grid node at a specific corner of an affected cell, or flagging all the nodes of an affected cell.

Pro

Con

Use a spatial definition of affected area (eg multipolygon)

The area affected by disturbance is identified by one or more polygons in a GIS data set. Each polygon has attributes identifying the date and magnitude of the disturbance (ie the potential error of the deformation model in the defined area). The data set could include multiple polygons for a disturbance with different magnitudes (ie contours of disturbance).

Pro

Con
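Checking whether a transformed point falls inside an area-of-concern polygon is a standard point-in-polygon query. A self-contained ray-casting sketch (real software would use a GIS library; the zone attributes are hypothetical):

```python
def point_in_polygon(point, polygon):
    """Ray-casting point-in-polygon test. `polygon` is a list of
    (x, y) vertices; the ring closes back to the first vertex."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical disturbance polygon with date and magnitude attributes.
zone = {"date": "2016-11-13", "max_error_m": 0.5,
        "ring": [(173.0, -42.6), (173.3, -42.6), (173.3, -42.4), (173.0, -42.4)]}
print(point_in_polygon((173.1, -42.5), zone["ring"]))  # -> True: warn the user
print(point_in_polygon((172.5, -42.5), zone["ring"]))  # -> False: no warning
```

Transformation software supporting this option would run the test for each transformed point and attach the matching polygon's date and magnitude attributes to the warning.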

Comments

The uncertainty is already defined and supported in the deformation model specification, so it can be used in addition to, or instead of, any other supported methods.

The spatial definition could be incorporated in metadata directly, or as an external resource referenced by metadata (depending on the carrier's metadata support), so is likely to be available by default. However this would not provide a standard definition that software could use.

Action required

Decide which options the specification should support. Add documentation to the specification to describe the additional data or algorithms required.

Note: an alternative may be to not include these options, but to indicate that they are potential future extensions.

ccrook commented 2 years ago

This discussion has been added in a "future developments" annex in the abstract specification document. https://docs.google.com/document/d/1EsKJvasc54OgIngdABL263lCNjj6HgpYMtqGvOq0KyQ/edit#heading=h.2yutaiw

Closing this issue.