This compares two options for representing piecewise linear time functions in the deformation model functional model, and more particularly in the GGXF group header that will implement the time function, with a view to choosing which best suits as an implementation approach.
Piecewise functions are desired as an option in the deformation model because 1) they can approximate arbitrarily complex time functions, and 2) they are used in the LINZ deformation model and so are required to implement it.
However, when implementing this in GGXF headers it raised (for me) the question of whether a simpler structure using ramp functions could suffice ...
In a YAML style GGXF header this would look like:
```yaml
timeFunction:
  baseFunctions:
    - type: piecewise
      epochMultipliers:
        - epoch: 2013-05-08T00:00:00Z
          multiplier: 0.0
        - epoch: 2013-05-08T00:00:00Z
          multiplier: 0.9
        - epoch: 2014-02-05T00:00:00Z
          multiplier: 1.2
        - epoch: 2015-01-01T00:00:00Z
          multiplier: 1.4
```
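To make the semantics concrete, here is a minimal Python sketch (illustrative only, not part of the proposal) of evaluating such a piecewise function, assuming the multiplier is held constant before the first node and after the last node and is interpolated linearly between nodes, with a repeated epoch representing a step:

```python
from datetime import datetime

# (epoch, multiplier) nodes from the piecewise example above, in increasing
# epoch order; a repeated epoch represents a step in the multiplier.
nodes = [
    (datetime(2013, 5, 8), 0.0),
    (datetime(2013, 5, 8), 0.9),
    (datetime(2014, 2, 5), 1.2),
    (datetime(2015, 1, 1), 1.4),
]

def piecewise_multiplier(t, nodes):
    """Evaluate the piecewise linear time function at epoch t (assumed
    semantics: constant before the first and after the last node, linear
    interpolation between adjacent nodes)."""
    if t <= nodes[0][0]:
        return nodes[0][1]
    if t >= nodes[-1][0]:
        return nodes[-1][1]
    for (t0, m0), (t1, m1) in zip(nodes, nodes[1:]):
        if t0 <= t <= t1:
            if t0 == t1:           # step: take the later value
                continue
            return m0 + (m1 - m0) * ((t - t0) / (t1 - t0))
    return nodes[-1][1]

print(piecewise_multiplier(datetime(2014, 2, 5), nodes))   # 1.2
print(piecewise_multiplier(datetime(2020, 1, 1), nodes))   # 1.4
```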
Pro:
Con:
A ramp function is a simple function with four parameters: startEpoch, startMultiplier, endEpoch and endMultiplier. The piecewise function above can be represented by summing three ramp functions:
```yaml
timeFunction:
  baseFunctions:
    - type: ramp   # red function
      startEpoch: 2013-05-08T00:00:00Z
      startMultiplier: 0.0
      endEpoch: 2013-05-08T00:00:00Z
      endMultiplier: 0.9
    - type: ramp   # blue function
      startEpoch: 2013-05-08T00:00:00Z
      startMultiplier: 0.0
      endEpoch: 2014-02-05T00:00:00Z
      endMultiplier: 0.3
    - type: ramp   # green function
      startEpoch: 2014-02-05T00:00:00Z
      startMultiplier: 0.0
      endEpoch: 2015-01-01T00:00:00Z
      endMultiplier: 0.2
```
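As a quick check that the two representations are equivalent, a small Python sketch (illustrative only) that evaluates each ramp independently - holding the start multiplier before startEpoch and the end multiplier after endEpoch, which is my assumption here - and sums the three ramps above:

```python
from datetime import datetime

def ramp_multiplier(t, start_epoch, start_mult, end_epoch, end_mult):
    """Evaluate one ramp base function (assumed semantics: the multiplier is
    startMultiplier before startEpoch, endMultiplier after endEpoch, and
    varies linearly in between)."""
    if t <= start_epoch:
        return start_mult
    if t >= end_epoch:
        return end_mult
    frac = (t - start_epoch) / (end_epoch - start_epoch)
    return start_mult + frac * (end_mult - start_mult)

# The red, blue and green ramps from the example above
ramps = [
    (datetime(2013, 5, 8), 0.0, datetime(2013, 5, 8), 0.9),
    (datetime(2013, 5, 8), 0.0, datetime(2014, 2, 5), 0.3),
    (datetime(2014, 2, 5), 0.0, datetime(2015, 1, 1), 0.2),
]

# All base functions are evaluated and summed
for t in (datetime(2014, 2, 5), datetime(2015, 1, 1)):
    total = sum(ramp_multiplier(t, *r) for r in ramps)
    print(t.date(), round(total, 2))   # 1.2 and 1.4, matching the piecewise example
```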
Pro:
Con:
Although the GGXF implementation should not drive the functional model definition, the simpler ramp function is preferred (after much vacillation).
There was a slight preference for the simpler base function. However, the name "ramp function" generally did not resonate (and subsequent investigation shows it is different from the conventional mathematical ramp function).
The functional model will include a function type called a piecewise linear function, defined just by a start and end epoch and multiplier. (The YAML example in the header has been updated to reflect this.)
The YAML structure must support multiple time function components. My original proposal was:
```yaml
groups:
  # Start of a GGXF group corresponding to a component in the deformation model
  - remark: Deformation from 2 Feb 2013 earthquake
    timeFunction:
      # minEpoch and maxEpoch define the temporal extent within which the
      # group needs to be evaluated
      minEpoch: 2013-02-02T11:58:00
      # maxEpoch: 2019-12-31T00:00:00
      # The time function is defined by adding one or more base functions
      baseFunctions:
        - type: step
          referenceEpoch: 2013-02-02T11:58:00
    grids:
      - remark: horizontal coseismic displacement
        ....
```
However Roger Lott has pointed out that the minEpoch and maxEpoch items are redundant, as they can be calculated from the time function. The intention of having them was computational efficiency: when calculating the model, the group can be ignored if the calculation epoch is outside this temporal extent. If they are omitted the time function must be evaluated at the calculation epoch and the group ignored if it evaluates to 0.0. Generally this will be a small computational cost, particularly in the common case where the deformation is evaluated many times using the same calculation epoch (e.g. transforming a data set from one epoch to another).
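To illustrate the trade-off, a rough Python sketch (attribute names follow the YAML above; the base function evaluation is simplified to the step case) of how an implementation might use the optional minEpoch/maxEpoch to skip a group, falling back to evaluating the base functions when they are absent:

```python
from datetime import datetime

def step_multiplier(t, reference_epoch):
    """A step base function: 0.0 before referenceEpoch, 1.0 from then on."""
    return 1.0 if t >= reference_epoch else 0.0

def group_multiplier(group, epoch):
    """Return the group's time function multiplier at the calculation epoch.

    If minEpoch/maxEpoch are present the group can be skipped cheaply when
    the epoch is outside the temporal extent; otherwise the base functions
    are evaluated and summed, and a result of 0.0 means the group can be
    ignored anyway."""
    tf = group["timeFunction"]
    min_epoch, max_epoch = tf.get("minEpoch"), tf.get("maxEpoch")
    if (min_epoch and epoch < min_epoch) or (max_epoch and epoch > max_epoch):
        return 0.0
    return sum(step_multiplier(epoch, f["referenceEpoch"])
               for f in tf["baseFunctions"] if f["type"] == "step")

group = {"timeFunction": {
    "minEpoch": datetime(2013, 2, 2, 11, 58),
    "baseFunctions": [{"type": "step", "referenceEpoch": datetime(2013, 2, 2, 11, 58)}],
}}
print(group_multiplier(group, datetime(2012, 1, 1)))   # 0.0 (skipped via minEpoch)
print(group_multiplier(group, datetime(2020, 1, 1)))   # 1.0
```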
If these are omitted then the baseFunctions element is not required to identify the list of base time functions in the data structure. Instead the timeFunction element can itself be an array, so that the overall structure is somewhat simpler:
```yaml
groups:
  # Start of a GGXF group corresponding to a component in the deformation model
  - remark: Deformation from 2 Feb 2013 earthquake
    timeFunction:
      - type: step
        referenceEpoch: 2013-02-02T11:58:00
    grids:
      - remark: horizontal coseismic displacement
        ....
```
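A minimal PyYAML sketch (illustrative only; requires the pyyaml package) showing how the simpler structure is consumed - the timeFunction element is itself the list of base functions, with no intermediate baseFunctions key:

```python
import yaml

header = yaml.safe_load("""
groups:
  - remark: Deformation from 2 Feb 2013 earthquake
    timeFunction:
      - type: step
        referenceEpoch: 2013-02-02T11:58:00
""")

for group in header["groups"]:
    # timeFunction is directly the array of base functions
    for base_function in group["timeFunction"]:
        print(base_function["type"], base_function["referenceEpoch"])
# prints: step 2013-02-02 11:58:00
```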
Do we agree with this simpler structure?
Pros:
Cons:
Recommendation: ?
Copied from @desruisseaux https://github.com/opengeospatial/CRS-Gridded-Geodetic-data-eXchange-Format/issues/10#issuecomment-874582517
About the time function, a possible approach would be to decide that the timeFunction value is always an array of functions. There would be no need for an explicit "piecewise" function since the array would be the piecewise function. The array length would be 1 in the simple cases. Taking the example at the beginning of this issue, it could be:
```
deformationModel
  - component
      spatialModel:
        ...
      timeFunction:
        - functionType: linear
          startEpoch: 2015-05-02
          endEpoch: 2016-05-02
          multiplier: 1.5
        - functionType: exponential
          startEpoch: 2016-05-02
          endEpoch: 2017-05-02
          referenceEpoch: 2010-03-04
          decayRate: 0.8
          amplitude: 1.3
        - ...
  - component
      ...
```
The array elements would be in increasing order of startEpoch. We could possibly remove the endEpoch if we specify that the end epoch of an element is implicitly the start epoch of the next element (it would avoid worrying about possible overlaps or holes in the temporal range of the elements).
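For comparison with the discussion that follows, here is one possible reading of that proposal as a Python sketch (illustrative only): exactly one array element is selected - the one whose interval contains t, with endEpoch defaulting to the next element's startEpoch. The reply below explains that the intended semantics is instead to evaluate and sum all base functions.

```python
from datetime import datetime

# The first two elements of the timeFunction array above, with endEpoch
# omitted to illustrate the implicit end epoch.
functions = [
    {"functionType": "linear", "startEpoch": datetime(2015, 5, 2), "multiplier": 1.5},
    {"functionType": "exponential", "startEpoch": datetime(2016, 5, 2),
     "referenceEpoch": datetime(2010, 3, 4), "decayRate": 0.8, "amplitude": 1.3},
]

def select_function(t, functions):
    """Return the single element whose [startEpoch, endEpoch) interval contains t."""
    for current, nxt in zip(functions, functions[1:] + [None]):
        end_epoch = current.get("endEpoch") or (nxt["startEpoch"] if nxt else None)
        if t >= current["startEpoch"] and (end_epoch is None or t < end_epoch):
            return current
    return None

print(select_function(datetime(2015, 12, 1), functions)["functionType"])   # linear
```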
@desruisseaux This is more or less the same as the ramp function described above. (But note following the meeting I am calling this piecewise linear rather than ramp). In the above case the ramps all start at 0 but that is not necessarily the case - in a "reverse patch" it would start with a non-zero value and end at 0.0. Also the base functions are independent and all get evaluated and summed. Each base function may have start and end values, but these are not exclusive. It is done this way to support more complex cases such as exponential+logarithmic which can be used in post-earthquake time evolution.
In the example above this does make using multiple ramp (= piecewise linear) functions a bit cumbersome compared to the original piecewise proposal, but I think this is a worthwhile trade-off for a simpler structure, especially as the piecewise linear function is very little used in practice (at least so far).
Updated the YAML in the opening comment to reflect the naming proposed for GGXF.
> Also the base functions are independent and all get evaluated and summed.

I thought that only one base function would be evaluated, the one for which a time t is between startEpoch and endEpoch. But the "ramp" proposal is that all functions would be evaluated (unconditionally?) and summed?
@desruisseaux I had never intended that the base functions are selected from. I may need to emphasize that more clearly in the functional model document!
It was always intended that they are all used; for example, a post-seismic movement could be modelled by adding together an exponential and a logarithmic function.
In practice I doubt there will be many cases where more than a couple would be used for the same spatial model but there seems no reason to unnecessarily restrict producers generating models. I'd rather leave it with a sufficient set of time function building blocks to suit any likely future requirement.
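As an illustration of this summing behaviour (the relaxation formulas and parameter values below are placeholders for illustration, not the functional model's definitions), a post-seismic multiplier built from an exponential and a logarithmic base function evaluated independently and added:

```python
import math
from datetime import datetime

def exponential(t, reference_epoch, relaxation_years, amplitude):
    """Illustrative exponential relaxation: amplitude * (1 - exp(-dt/relaxation))
    after the reference epoch, 0.0 before it."""
    dt = (t - reference_epoch).days / 365.25
    return 0.0 if dt <= 0 else amplitude * (1.0 - math.exp(-dt / relaxation_years))

def logarithmic(t, reference_epoch, relaxation_years, amplitude):
    """Illustrative logarithmic relaxation: amplitude * ln(1 + dt/relaxation)."""
    dt = (t - reference_epoch).days / 365.25
    return 0.0 if dt <= 0 else amplitude * math.log(1.0 + dt / relaxation_years)

# Both base functions are always evaluated and their multipliers summed.
event = datetime(2016, 1, 1)
for t in (datetime(2017, 1, 1), datetime(2021, 1, 1)):
    m = exponential(t, event, 1.0, 0.4) + logarithmic(t, event, 0.5, 0.2)
    print(t.date(), round(m, 3))
```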
Changes from this issue have been implemented into the abstract specification document. Closing on that basis.
The currently proposed time function is documented in the [functional model strawman](https://github.com/opengeospatial/CRS-Deformation-Models/blob/master/functional-model/strawman-cc/functional-model-strawman-cc.adoc#formula-time-function). It is a composite function calculated by adding one or more base components, each one of the base function types defined there.
This can support the most likely functions that we expect to be used in deformation models, based on geophysical deformation models and GNSS reference station models, both of which are generally based on trajectories of individual points. The piecewise function adds the capability to approximate arbitrarily complex functions.
To progress GGXF in a way that supports deformation models we need to identify what is required in it to do that. The deformation model GGXF headers are mostly generic - little different from (for example) a datum transformation grid. However, the time function is structurally relatively complex, and so we want to offer GGXF a likely draft header.
An (unrealistic) example model to demonstrate each type is:
I would like to have a discussion to gain team consensus on the preferred model to finalize its description in GGXF.