The current third bullet:
• The definition also implies a “reference” coordinate system within which coordinates of “fixed” points do not change over time. Commonly this is defined as the location of a point in a measurable coordinate system at a specific epoch. This is not measurable, except at that epoch. The reference coordinate system may be used as if it were a measurable coordinate system, particularly for lower accuracy or local usage.
seems to imply the necessity of always having two CRSs. It is a good practical implementation for NZ, and maybe more generally too, but does it have to be mandatory? Does GIA require a second CRS to describe the deformation? Here we may be getting into the weeds of distinguishing between deformation that we choose to describe as a transformation (differences between two CRSs) and deformation described by a point motion operation (coordinate change with time within one CRS). It seems to me that the DMFM should describe such methods of implementation, not something that belongs in the fundamental definitions.
We have a dichotomy over whether we are talking about the earth as a whole (which is definitely plastic) or bits of the earth that for current practical purposes can be considered to be rigid. The latter can be described far more simply than is required for deforming zones. Conversely, an FM necessary for deforming zones will be over the top, a sledgehammer to crack a nut, for anything that can be described adequately by something as simple as an Euler rotation (see the sketch below). I can see this dichotomy coming back time after time. It gets us into the weeds of user accuracy.
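For concreteness, here is a minimal Python sketch of the "simple" end of that dichotomy: the velocity of a point on a rigid plate computed from a hypothetical Euler pole (v = ω × r, spherical approximation). The pole position and rotation rate are illustrative values, not from any published plate model.

```python
import numpy as np

# Hypothetical Euler pole: latitude, longitude (degrees) and rotation
# rate (degrees per million years). Values are illustrative only.
POLE_LAT, POLE_LON, OMEGA_DEG_PER_MYR = 50.0, -75.0, 0.25

R_EARTH_M = 6.371e6  # mean Earth radius, spherical approximation

def unit_vector(lat_deg, lon_deg):
    """Unit position vector on a sphere from geographic coordinates."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def plate_velocity_m_per_yr(lat_deg, lon_deg):
    """Velocity of a point on a rigid plate: v = omega x r."""
    omega_rad_per_yr = np.radians(OMEGA_DEG_PER_MYR) / 1e6
    omega_vec = omega_rad_per_yr * unit_vector(POLE_LAT, POLE_LON)
    r_vec = R_EARTH_M * unit_vector(lat_deg, lon_deg)
    return np.cross(omega_vec, r_vec)  # metres/year, Earth-centred Cartesian

v = plate_velocity_m_per_yr(-41.3, 174.8)  # e.g. a point near Wellington
print(f"velocity: {1000 * np.linalg.norm(v):.1f} mm/yr")
```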
seems to imply the necessity of always having two CRSs. It is a good practical implementation for NZ, maybe more generally too, but does it have to be mandatory? Does GIA require a second CRS to describe the deformation?
I've reworded this a bit. I think the deformation model is used either to implement a time-dependent coordinate transformation, in which case I suspect it probably will require a reference CRS, or as a point motion model, in which case it doesn't actually need one (and it doesn't make sense to define it). Nonetheless there is still an implicit CRS, the one in which the calculated displacement evaluates to zero. A possible exception would be a point motion model that is a simple velocity, in which case it does not need a reference epoch. That case is not covered by this definition of a deformation model, which explicitly defines the model as calculating a displacement from a reference coordinate. I am not sure there is value in removing this from the working definition.
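To illustrate the distinction being discussed, here is a minimal sketch of the two ways the same displacement function might be applied. The `displacement` function is a hypothetical placeholder, not part of any defined interface; a real model would interpolate a gridded displacement field.

```python
def displacement(position, epoch):
    # Placeholder: a constant velocity, purely for illustration.
    velocity = (0.021, -0.038, 0.0)   # metres/year (illustrative)
    dt = epoch - 2000.0               # reference epoch (illustrative)
    return tuple(v * dt for v in velocity)

def point_motion(position, epoch_from, epoch_to):
    """Point motion operation: move a coordinate within one CRS.
    No second CRS is needed: x(t2) = x(t1) - d(t1) + d(t2)."""
    d_from = displacement(position, epoch_from)
    d_to = displacement(position, epoch_to)
    return tuple(p - df + dt_ for p, df, dt_ in zip(position, d_from, d_to))

def to_reference_crs(position, epoch):
    """Time-dependent transformation: measurable CRS -> reference CRS.
    Here the reference CRS is explicit: subtract the displacement."""
    d = displacement(position, epoch)
    return tuple(p - di for p, di in zip(position, d))
```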
We have a dichotomy over whether we are talking about the earth as a whole (which is definitely plastic) or bits of the earth that for current practical purposes can be considered to be rigid.
Agreed. I've put in an initial note which hopefully makes that distinction by explicitly excluding rigid body transformations.
@JATarrio An interesting point. I don't think precision metadata covers this requirement. I think what you are suggesting is that for very large-scale mapping you should use the deformation model to ensure visual alignment of datasets? This depends more on the maximum size of displacements predicted by the model than on the accuracy/precision of those displacements. So the metadata should indicate the maximum size of displacements (maybe within a "reasonable" timeframe).
This is probably a greater level of detail than is appropriate for the working definition of a deformation model, which is a high-level description of the scope of what a deformation model is in this context. However, once we move on to discussing the functional model itself it will be good to ensure that this requirement is covered.
I will modify the definition to give more weight to the need for sufficient metadata accompanying the model.
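As a rough illustration of the "maximum size of displacements" point, here is a sketch relating displacement magnitude to map scale. The 0.2 mm plottable-resolution figure is a common cartographic rule of thumb, used here as an assumption rather than anything from the working definition.

```python
PLOT_RESOLUTION_M = 0.0002   # 0.2 mm on the printed/rendered map

def visible_at_scale(displacement_m, scale_denominator):
    """True if a ground displacement is visually significant at a map scale."""
    return displacement_m / scale_denominator > PLOT_RESOLUTION_M

print(visible_at_scale(0.5, 250))      # True:  2 mm on the map at 1:250
print(visible_at_scale(0.5, 50_000))   # False: 0.01 mm on the map at 1:50,000
```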
@ccrook thank you very much for the prompt reply.
I mean that the "prediction" of the model must take into account the precision of the target coordinate obtained from the source coordinate. That is, take the case of a velocity model (or trajectory model, according to some authors) for a GNSS station in reference frame A: if applying the model to bring it into reference frame B yields coordinates with a precision of, say, 5 cm, then this model or GGXF should only be used at scales of 1:250 and smaller, since if it is used at the detailed engineering level (1:50 and larger) the error in the target coordinate cannot be accommodated. For this reason its application must be restricted to scales of 1:250 and smaller.
There is a risk that in countries where the reference frame is not revised with sufficient frequency (especially in seismically active environments), the models we are proposing could be used by a different target audience than intended; that is, they may end up being used in geodesy when they only apply to surveying and cartography.
For example, VEMOS17 declares mm precision, which could lead to its use at the geodetic level, but in places like Chile and western Argentina its indeterminacy reaches the cm level, making it suited to purposes other than geodetic ones. I have seen the same issue in Spain with the NTv2 datum change grids, which were used geodetically when their precision was clearly cartographic. In my experience this is an aspect that should be reflected, especially given the massive access that GIS provides.
We must not forget that today PPP positioning (at an observation epoch, e.g. 2020.6) uses various velocity models to bring coordinates to the adjustment epoch of a reference frame. Once GGXF is a standard, that does not imply that by doing a PPP solution (at the observation epoch) and applying a GGXF of a certain precision I can obtain a geodetic coordinate at the adjustment epoch with the same precision that the observation epoch had (sketched below).
I think these are issues that we have to assess for areas with low GNSS station density and heterogeneous movement. The above would be clarified by clear and objective metadata.
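To make the PPP epoch-propagation point concrete, here is a hedged sketch of propagating one coordinate component and its uncertainty from the observation epoch to an adjustment epoch with a simple velocity model. All numbers are illustrative, not from any real model or frame.

```python
import math

t_obs, t_adj = 2020.6, 2016.0            # observation / adjustment epochs
x_obs = 4_027_894.123                    # one coordinate component, metres
sigma_obs = 0.005                        # PPP precision, metres
v, sigma_v = -0.032, 0.002               # velocity and its precision, m/yr

dt = t_adj - t_obs
x_adj = x_obs + v * dt
# The velocity uncertainty grows linearly with the propagation interval,
# so the propagated coordinate cannot keep the observation's precision:
sigma_adj = math.hypot(sigma_obs, sigma_v * dt)

print(f"x({t_adj}) = {x_adj:.3f} m  +/- {sigma_adj * 1000:.1f} mm")
```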
@JATarrio Thanks for the clarification.
Precision is particularly complex as it depends on where and when a transformation is applied. If the deformation is a simple velocity model it isn't too hard to extrapolate a precision model, but as soon as you have deformation events in the model (or, even worse, unmodelled deformation events) then it becomes much more complex. So if you are using it to transform data between two recent dates after any events, it may be accurate to, say, 1 cm; but if the dates (and location) span a deformation event then the accuracy may be worse than, say, 10 cm (see the sketch below). I wonder if accuracy is generally something that can be calculated from the model for a specific transformation, rather than something embodied in descriptive metadata.
On a more philosophical note, even where the accuracy is not ideal for geodetic applications, it is still likely to be better than not using a deformation model. Ideally we can keep track of the accuracy implications of whatever processing we are doing on our data.
I suspect we'll have many good discussions on how to deal with uncertainty/accuracy once we progress this work!
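As a sketch of how accuracy might be calculated from the model for a specific transformation rather than stated as fixed metadata, the toy function below adds an extra uncertainty term whenever a (hypothetical) deformation event falls between the two epochs. The event list and sigmas are illustrative only.

```python
import math

SIGMA_VELOCITY = 0.002          # m/yr, illustrative velocity-model precision
EVENTS = [(2016.9, 0.10)]       # (epoch, post-event uncertainty in metres)

def transformation_sigma(epoch_from, epoch_to):
    """Uncertainty of a transformation between two epochs at one location."""
    lo, hi = sorted((epoch_from, epoch_to))
    terms = [SIGMA_VELOCITY * (hi - lo)]
    # Each deformation event spanned by the interval adds its own term.
    terms += [sigma for epoch, sigma in EVENTS if lo <= epoch <= hi]
    return math.sqrt(sum(t * t for t in terms))

print(transformation_sigma(2018.0, 2020.5))  # ~0.005 m: no event spanned
print(transformation_sigma(2015.0, 2020.5))  # ~0.100 m: event spanned
```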
@JATarrio @ccrook
My comments here may be oversimplistic or miss the point of this thread altogether. First, I agree that uncertainty associated with deformation models (or any model) should always accompany the model where possible and should be part of the model itself, not part of the metadata. As has already been mentioned, metadata can contain information about the suggested application of the model. But a user's decision to apply the model or not really depends on the user's application and its required accuracy. To me, model uncertainty can help to decide whether I need to apply the model at all. For example, if the model uncertainties are much larger than the model predictions, perhaps I don't need to apply them (of course, "much larger" needs to be quantified).
On the other hand, it is hoped that the issuing agency of a model would seriously consider whether to publish a model whose uncertainties far exceed model values. But this last statement needs further consideration, especially in the case of our deformation models, which are a function of both position and time. A post-seismic model may yield significant displacements in the near term after the event, but eventually its prediction uncertainty may exceed its predictions. How is this information best conveyed to the user, in the model itself or in the metadata?
As to the original point on whether the model is suitable for geodetic, cartographic or other applications, I'd like to refrain from making those sorts of judgements or suggestions as part of the deformation model, unless the model does NOT come with uncertainties. To me, though, a model without uncertainties should not be published. But if it is, then I agree metadata should convey as much information as possible on the applicability of the model. The decision whether or not to apply model values is up to the user of the model, their application and project requirements; and so far, I cannot envision any better deciding factor for this decision than how model uncertainties compare to model predictions.
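A minimal sketch of that deciding factor, comparing a model's predicted displacement with its uncertainty. The threshold `k` is a user/project choice ("much larger" quantified), not something proposed for the standard.

```python
def worth_applying(displacement_m, sigma_m, k=2.0):
    """Apply the model only if its signal exceeds k times its noise."""
    return abs(displacement_m) > k * sigma_m

print(worth_applying(0.150, 0.020))  # True:  15 cm shift, 2 cm sigma
print(worth_applying(0.010, 0.030))  # False: noise dominates the signal
```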
@kevinmkelly, @ccrook I agree with you completely on two points. The use of the final product, and of course its uncertainty, is not the responsibility of the working group; we cannot be responsible for what each agency or similar entity publishes as a model. What I mean, and I think we agree there, is that we must establish sufficient metadata so that the user of a GGXF has the technical tools (objective, not subjective) to decide where, when and how to use the product. It is important to keep in mind that the format can (and hopefully will) be used in any geospatial setting, and that this information will be relevant because it implicitly declares its applicability. And without a doubt I agree with @ccrook that no philosophical aspect should be included (if that is how it was understood, I did not mean that, sorry). But the quantity of metadata sufficient to establish how to use a time-dependent model in Europe is not the same as for, say, Japan or Chile, where the spatial component of the time-dependent deformation is very heterogeneous; the metadata must therefore cover all these cases.
@JATarrio @ccrook - Thank you for bringing up these issues and for your insights on them. I think you have set a good framework for the WG to at least begin to grapple with the issue of model applicability and metadata. But we may be getting ahead of ourselves; we are still at the model definition stage!
During the 7 September OGC CRS DWG Deformation Model meeting we continued the discussion of the working definition of a deformation model. The discussion included:
The working definition in 5b15af7 was accepted at the 5 October meeting. (Though the comments N7 and N11 have been amended since that meeting).
This issue refers to the current version of the working definition of a deformation model.
The working definition for deformation model provides a precise definition of a deformation model in the context of coordinate operations, which is the scope of this work. A straw man definition was written by Chris Crook for the 13th July meeting of the project team, but unsurprisingly there was barely time to begin discussion of this definition.
If you have comments on this definition please add them to the discussion below.