@ccrook Re the August 2nd discussion of the UML for a deformation model, my perspective is that a minimalist high-level model is already available from ISO 19111.
In the attached diagram, everything in black is from the ISO model. You could simply rename the CoordinateOperation class as 'Deformation model' and I think it works. However for clarity it might be better to subtype the class, in which case something has to be in the subtype to differentiate it from its parent class. In red I have added a discovery metadata attribute using the ISO citation class as its data type, but it could be any other attribute(s) or restrictions on CoordinateOperation attributes. For the discovery metadata you do not need to name individual attributes from the class (unless we re-write GGXF to not use Citation and instead use explicit discovery metadata attributes).
The other change in red to the right of the diagram is to explicitly show that the 'method' is in two parts (spatial and time function). This is not strictly necessary, as they could be part of the method description/formulae. As a rather poor analogy, an oblique cylindrical map projection of the ellipsoid would have three components – projection from ellipsoid to sphere, projection to plane, and rotation to north – but these would all be within the overall description of one method.
Similarly, at the abstract high level, within the UML there is no necessity to give the parameters for each time function. The 19111 data model works perfectly well for describing a Helmert conformal transformation, a Transverse Mercator map projection, and a raft of other conversions and transformations, without any parameters being in the UML. The description of the method should include these parameters.
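As a rough sketch of what I mean (illustrative Python only; the class and attribute names loosely mirror ISO 19111 / ISO 19115 but are placeholders, not the normative UML):

```python
from dataclasses import dataclass

@dataclass
class Citation:
    # Stand-in for the ISO 19115 CI_Citation class used for discovery metadata.
    title: str
    publisher: str
    edition: str

@dataclass
class OperationMethod:
    # The method description carries the spatial model and time function detail
    # as text/formulae rather than as explicit UML attributes or parameters.
    name: str
    formula: str

@dataclass
class CoordinateOperation:
    source_crs: str
    target_crs: str
    interpolation_crs: str
    method: OperationMethod

@dataclass
class DeformationModel(CoordinateOperation):
    # The only addition relative to the parent class: discovery metadata.
    discovery_metadata: Citation
```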
Of course it would be possible to add detail to the UML model to explicitly include the time functions you have identified and the attributes each should have. An advantage of staying at a high level of abstraction is that the data model is extensible to further time functions not initially identified, in the unlikely event of that being necessary. But details of those already identified will be included in the DM description document and I would have thought that duplication in UML will not significantly help implementation.
DefModel_RL2021-08-04.pdf
Thank you @RogerLott. I very much appreciate your offering a suggestion. I think this doesn't quite work, mainly because there may be multiple components (spatial model + time function) within a single coordinate operation. I am wondering: how would, say, an NTv2-based datum transformation fit into this model? If, as I suspect, it is completely embedded in OperationMethod, then surely so is the deformation model. It is a different operation, and requires a time parameter, but it is just a single operation using a single input coordinate+time. With the Bursa-Wolf option I think it is a concatenated operation. Not sure how something like a geoid model fits here in terms of handling the interpolation coordinate system as well as the source and target system. That could also help me understand how to fit the deformation model into this framework.
Thanks again Chris
@ccrook The point I was trying to get across was that I think we already have a high-level abstract data model that can handle a deformation model. And indeed the whole deformation model could be considered to be an operation method: DefModel_RL2021-08-04a.pdf
With the Bursa-Wolf option I think it is a concatenated operation.
I am not sure what you are asking here. A 15-parameter time-dependent Helmert transformation is treated as a single coordinate operation. The operation method explains that there are two stages: first, using the rates over the relevant epoch difference to calculate values to be added to the 7 parameters; second, applying the 7-parameter transformation.
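For what it's worth, the two stages can be sketched like this (illustrative Python only; the parameter ordering, units and the small-angle position vector convention are assumptions for the sketch, not the EPSG method definition):

```python
import numpy as np

def time_dependent_helmert(xyz, t, t0, p0, rates):
    """Two-stage evaluation of a 15-parameter time-dependent Helmert transformation.

    xyz   : geocentric coordinates (metres)
    t, t0 : coordinate epoch and parameter reference epoch (decimal years)
    p0    : (tx, ty, tz, rx, ry, rz, s) at t0 - translations in metres,
            rotations in radians, scale difference unitless
    rates : rates of change of the 7 parameters per year
    """
    # Stage 1: propagate the 7 parameters to the epoch of the coordinates.
    tx, ty, tz, rx, ry, rz, s = np.asarray(p0) + (t - t0) * np.asarray(rates)

    # Stage 2: apply the 7-parameter (position vector, small-angle) transformation.
    xyz = np.asarray(xyz, dtype=float)
    R = np.array([[0.0, -rz,  ry],
                  [ rz, 0.0, -rx],
                  [-ry,  rx, 0.0]])
    return xyz + np.array([tx, ty, tz]) + s * xyz + R @ xyz
```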
On the other hand, if secular motion were described using such a transformation in conjunction with co-seismic and post-seismic modelling, I would be inclined to describe this as two separate coordinate operations associated together as a concatenated operation.
But the ISO data model has flexibility to allow either approach.
Not sure how something like a geoid model fits here in terms of handling the interpolation coordinate system as well as the source and target system
Note the three associations between coordinate operation and CRS. A geoid model is described using the transformation construct, so it has a source CRS which will be geodetic with a 3D ellipsoidal coordinate system, a target CRS which is 1D vertical with gravity-related heights, and an interpolation CRS which will be a 2D CRS (because the grid is constructed in this), most likely the horizontal subset of the source CRS. This structure applies to all transformations. Point motion operations, which operate within a CRS, require that the target CRS = the source CRS. Not sure that this is answering what you want to have answered.
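A sketch of how that plays out for a geoid model (illustrative Python; the function and the `interpolate` grid lookup are hypothetical stand-ins for whatever interpolation the method actually defines):

```python
def geoid_transform(lat, lon, h_ellipsoidal, geoid_grid):
    """Geodetic 3D (source CRS) -> gravity-related height (target CRS).

    The interpolation CRS is the 2D horizontal CRS in which the grid is built:
    only (lat, lon) are used to look up the geoid undulation N, while the
    height component is what actually gets transformed.
    """
    N = geoid_grid.interpolate(lat, lon)  # lookup at the interpolation coordinate
    return h_ellipsoidal - N              # gravity-related height H = h - N
```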
@RogerLott In terms of the Bursa-Wolf/15-parameter model, what I was suggesting is that if that is part of the deformation model, then the deformation model is a concatenated operation. The reason I think this is that the output from the 15-parameter transformation becomes the input to the deformation model - they are applied sequentially.
This is different to the application of deformation model components. These are not applied sequentially - they all take the same input coordinate. There was a discussion a while ago (6-9 months maybe), summarised briefly in the functional model strawman document at https://github.com/opengeospatial/CRS-Deformation-Models/blob/master/functional-model/strawman-cc/functional-model-strawman-cc.adoc#sequential-or-parallel-evaluation-of-components
So I don't believe the separate components are a concatenated operation - the entire deformation model with all its components and time functions is just a single operation method.
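To spell out the distinction as a sketch (illustrative Python; `steps` and `components` are placeholders with duck-typed `apply`, `interpolate` and `value` methods, not GGXF or ISO names):

```python
def concatenated_operation(coord, steps):
    # Sequential: the output of each step is the input to the next.
    for step in steps:
        coord = step.apply(coord)
    return coord

def deformation_model(coord, t, components):
    # Parallel: every component is evaluated at the SAME interpolation
    # coordinate; the scaled displacements are summed and applied once.
    total = [0.0, 0.0, 0.0]
    for c in components:
        d = c.spatial_model.interpolate(coord)  # displacement at the interpolation coordinate
        f = c.time_function.value(t)            # time function value for epoch t
        total = [ti + f * di for ti, di in zip(total, d)]
    return [xi + ti for xi, ti in zip(coord, total)]
```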
I think perhaps the best analogy for the deformation model we are defining is a gridded velocity model (is that already an operation method?). Functionally the deformation model we are defining is no different. The parameters inside it are different (it has other time functions and multiple components), but functionally, at the level of the ISO model, it is just a single operation.
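As a formula, a gridded velocity model applied as a point motion operation is just (with $\mathbf{v}(\varphi,\lambda)$ the interpolated velocity and $t_1, t_2$ the two epochs):

$$\mathbf{X}(t_2) = \mathbf{X}(t_1) + \mathbf{v}(\varphi,\lambda)\,(t_2 - t_1)$$

The deformation model simply replaces $\mathbf{v}(\varphi,\lambda)\,(t_2 - t_1)$ with a sum of component displacements, each scaled by its own time function.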
So in summary I think the deformation model as we are defining it is already described by the ISO model. It is potentially a concatenated operation if it includes a 15-parameter Helmert transformation, but the deformation component is a single operation method.
In terms of a UML model, I guess what is still of interest is the internals of the operation method. This could be considered a subclass alongside subclasses for all the other methods that are implemented, such as a gridded velocity model. It is the internals of the deformation model that impact its representation in GGXF and its implementation in software. Outside of this it is just another operation method.
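Roughly like this, as a sketch (illustrative Python only; none of these class names come from 19111 or EPSG):

```python
class OperationMethod:
    """Generic ISO-19111-style operation method (sketch only)."""

class GriddedVelocityModelMethod(OperationMethod):
    """Internals: a single velocity grid with an implicit linear time function."""

class DeformationModelMethod(OperationMethod):
    """Internals: multiple components, each a spatial model (grid) plus a time
    function. This is the level at which the GGXF encoding and software
    implementations live; outside of it, it is just another operation method."""
```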
Concatenated transformation operations are definitely necessary to support a deformation model as a transformation operation. A velocity model is frame dependent and is realistically only an intraframe transformation between different epochs of that frame. That in itself is usually sufficient to model interseismic secular motion if the velocity model is truly representative of that. If a reference frame is updated to account for seismic displacements, that effectively creates a new version of the reference frame. The transformation between these versions is a grid transformation (or patch as used in NZ).

So, thinking of a typical transformation scenario in NZ: a PPP GNSS position is estimated in the IGS14 RF at epoch 2021.4 and is to be transformed to a pre-earthquake version of NZGD2000 (v20130801) to suit a road design in that version. The concatenation would be:

1. A 14/15-parameter time-dependent conformal transformation from IGS14 to ITRF96 at epoch 2021.4.
2. An intraframe transformation from ITRF96 at epoch 2021.4 to 2000.0 using the NZGD2000 velocity model (which is defined in the ITRF96 RF).
3. A grid transformation from v20180701 to v20130801, which in itself might be a concatenated operation using different coseismic displacement grids.

If by chance the road design was transformed by the civil engineers from v20130801 to v20180701 (again using a sequence of seismic grids) then step 3 is not required. I'm not sure how this relates to the UML concept but it is going to be a very typical transformation sequence now and in the future.
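For concreteness, the concatenation in that scenario could be sketched like this (illustrative Python only; the three step functions are hypothetical placeholders passed in as callables, not PROJ or EPSG operations):

```python
def igs14_to_nzgd2000_v20130801(xyz_igs14, epoch, helmert_igs14_itrf96,
                                nzgd2000_velocity_model, coseismic_grid_shift):
    """Concatenated operation for the scenario above.

    Hypothetical step callables:
      helmert_igs14_itrf96(xyz, epoch)           - 14/15-parameter time-dependent conformal transformation
      nzgd2000_velocity_model(xyz, t_from, t_to) - intraframe step between epochs in ITRF96
      coseismic_grid_shift(xyz, from_v, to_v)    - version-to-version grid transformation
    """
    # Each step consumes the previous step's output - that is what makes it concatenated.
    xyz = helmert_igs14_itrf96(xyz_igs14, epoch)               # 1. IGS14 -> ITRF96 at epoch 2021.4
    xyz = nzgd2000_velocity_model(xyz, epoch, 2000.0)          # 2. ITRF96 @ epoch 2021.4 -> @ 2000.0
    return coseismic_grid_shift(xyz, "20180701", "20130801")   # 3. v20180701 -> v20130801
```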
Concatenated transformation operations are definitely necessary to support a deformation model as a transformation operation.
@rstanaway - I don't agree! Or more specifically, I don't agree from the point of view of coordinate operations. My point of view is as follows - so prefix everything below with "I think that ...".
Velocity, post-seismic, and co-seismic are all part of the total deformation model. If we are looking at point motion models (i.e. intraframe models) then they are all part of it, and they are all used to define the trajectory of a point. There may be situations where you want to consider them separately, mainly from the point of view of developing models - for example to work out what the coseismic movement is independently of the secular velocity. Also you might want to deal with the velocity model in isolation if another mechanism has dealt with other deformation, for example if you have updated your coordinates for coseismic deformation.
The significant difference between the use of deformation components and a concatenated coordinate operation (at least if I understand the concatenation properly) is that for a concatenated operation the output coordinates of one step become the input coordinates of the next step, whereas for the deformation model calculation the same input interpolation coordinate is used for all components to calculate the total displacement, which is then applied to the source coordinate. This is the conclusion of the discussion at https://github.com/opengeospatial/CRS-Deformation-Models/blob/master/functional-model/strawman-cc/functional-model-strawman-cc.adoc#sequential-or-parallel-evaluation-of-components. So the deformation calculation is not a concatenated operation.
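In symbols (a sketch, with $g_k$ the steps of a concatenated operation, and $f_i$ and $\mathbf{d}_i$ the time function and gridded displacement of deformation component $i$, all evaluated for the same interpolation coordinate $P$ and epoch $t$):

$$\text{concatenated:}\;\; \mathbf{X}_{\text{out}} = g_n(\cdots g_2(g_1(\mathbf{X}_{\text{in}}))\cdots) \qquad\qquad \text{deformation model:}\;\; \mathbf{X}_{\text{out}} = \mathbf{X}_{\text{in}} + \sum_i f_i(t)\,\mathbf{d}_i(P)$$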
By contrast I totally agree that it is a concatenated operation if we include the 14/15-parameter Helmert/Bursa-Wolf transformation (I must find out the correct term for this - I am using it very loosely at the moment).
In terms of your typical example above, the transformations would be slightly different to what you have described, in that step 2 is not "An intraframe transformation from ITRF96 at epoch 2021.4 to 2000.0 using the NZGD2000 velocity model" - it is a deformation model transformation from ITRF96 to NZGD2000(20180701) at the epoch of observation. (The first step, ITRFxxx to ITRF96, is a concatenated operation of course.)
We can't use the 20130801 deformation model at the observation epoch when we want to relate the locations on the 20130801 plan to current physical locations, as it doesn't correctly represent the trajectory of points affected by the deformation events since it was built. If the plan has not been updated to NZGD2000(20180701) then we need to convert the coordinates from 20180701 to 20130801, but we need to do this at an epoch at which the two models include the same events. The epoch could be 2000.0, but equally any epoch before the events added for the 20180701 model. Note that this is not a null transformation, as it will apply the effects of reverse patches introduced in 20180701. This is a concatenated operation of NZGD2000(20180701)->ITRF96 + ITRF96->NZGD2000(20130801).
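Written out (to first order, ignoring the small difference in interpolation coordinates, and assuming for illustration the sign convention that $\mathbf{D}_v(P,t)$ is the displacement taking NZGD2000 version $v$ coordinates to ITRF96 at epoch $t$, with $t_c$ a common epoch before the events that differ between the versions):

$$\mathbf{X}_{20130801} \approx \mathbf{X}_{20180701} + \mathbf{D}_{20180701}(P, t_c) - \mathbf{D}_{20130801}(P, t_c)$$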
Note that in the past LINZ has supplied NTv2 transformation grids to convert 20130801 to 20180701 (or vice versa), because GIS software has not had the capability and data to use the deformation models to do this, or to recognize different versions of NZGD2000. Once that capability is there, this can just be done using the built-in coordinate transformation operations of the GIS.
From what I understand the NZGD2000 deformation model is applied using a single function (using all components of the deformation model), so no problem there. My only concern is if bits of it are picked out for specific use cases. Are the transformation grids between different versions referenced from the original realisation, or are they incremental displacements from the previous version?

I'm really keen to adopt the NZ approach for Papua New Guinea using the DMFM schema and GGXF. The big difference with PNG is the sparsity of the CORS network. Even passive geodetic monuments are sparse in many remote areas, so a PPP/AusPOS IGSyyy@epoch-to-local approach is really important (primarily using the velocity model part), and in areas of earthquake displacement there are often two versions of the local datum used in parallel: one pre-earthquake (used by non-geodetic spatial databases, survey CAD files and engineering designs) and others post-earthquake (e.g. used to control Lidar and imagery). Transformation between these is best done using the seismic displacement grids and the velocity part doesn't come into it. Perhaps I'm guilty of looking at this too much from an end user's perspective!
@rstanaway Different versions of NZGD2000 are different versions of the deformation model. How these relate to each other in terms of common components depends on implementation.
I think when we implement the GGXF version then each version will have its own unique, complete GGXF file, e.g. 20180701 will be independent of the 20130801 version. Obviously a lot of the data used to compile the GGXF files will be the same for the two versions.
For the PROJ JSON+GeoTIFF format the JSON file for each is separate, but they share some common GeoTIFF nested grid files. So the 20180701 model uses all the grids used in 20130801, plus adds some more.
For the original LINZ published CSV format, the set of CSV files defines all versions of the model. When there is a new version of NZGD2000 some CSV files are added (for grids) and some are amended to define how the new grids are used for each version.
Transformation between versions can be done without taking account of which grids are in each version - it can be done using the entire model for each version as a single function. From a calculation efficiency point of view it can be better to look at individual components and use just those that are unique to one or other version, but it is not necessary.
Perhaps I'm guilty of looking at this too much from an end user's perspective!
That is definitely where we should be looking at it from. Unfortunately it is not always simple, as you well know! There are multiple scenarios and multiple ways of dealing with them. I think that the purpose of our deformation model work is to facilitate the implementation of deformation models (particularly national deformation models) into users' software, so they don't have to deal with them using specialist tools and workflows that are unique to each jurisdiction. But unfortunately it won't insulate them from having to understand the impact of deformation at some level.
Closing, as currently the UML model is not included in the specification. Should it be? The discussion above re the 14-parameter transformation is referenced from #23.
I have renamed this issue from "2 August meeting topics" to "UML model in relation to ISO model" as that is the actual subject of discussion below.
This relates to the UML model for the deformation model that I put forward to support my implementing it into GGXF and determining what attributes it required to do that. This caused some consternation and a lot of discussion in the August meeting as it unintentionally treads on the toes (possibly even the body) of the ISO coordinate operation model.