opengeospatial / GeoPose

OGC GeoPose development.
Apache License 2.0

Feedback about Frames of Reference from OGC I3S editor (TamB) #41

Closed · cperey closed this issue 2 years ago

cperey commented 2 years ago

From: Tamrat Belayneh tbelayneh@esri.com
Subject: RE: [DEADLINE: Nov 22 2021] Request for Input and Feedback about your standard in the GeoPose Reviewers Guide
Date: November 22, 2021 at 11:22:28 AM PST
To: Christine Perey cperey@perey.com

Hi Christine,

OGC’s 3D streaming formats, both I3S and 3D Tiles, deal with all forms of representation for the orientation of objects and scenes. In particular, I3S evolved from a 3D representation system that began by supporting only yaw, pitch, and roll for object orientation (in the earlier days I think we may even have had Euler angles), so we could tick all the boxes :). Nowadays (for the last 8+ years) our bounding volumes in I3S are oriented using quaternions. So I’d say quaternions are the more widely agreed/accepted form of object orientation representation in the 3D streaming services standards.
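The yaw/pitch/roll-to-quaternion step mentioned above can be sketched as follows. This is an illustrative helper, not code from I3S or GeoPose; it assumes the common Z-Y-X intrinsic rotation order and angles in radians.

```python
import math

def ypr_to_quaternion(yaw: float, pitch: float, roll: float) -> tuple:
    """Convert yaw/pitch/roll (radians, Z-Y-X intrinsic order) to a
    unit quaternion (w, x, y, z). Illustrative sketch only; the exact
    axis conventions depend on the frame of reference in use."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

# Zero rotation maps to the identity quaternion (1, 0, 0, 0).
identity = ypr_to_quaternion(0.0, 0.0, 0.0)
```

A quaternion avoids the gimbal-lock and interpolation problems of raw Euler angles, which is one reason the bounding-volume representation moved to it.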

Furthermore, in the I3S world we rely on geospatially-anchored objects for our ‘pose’ and/or allow for an internally-defined (local) frame of reference, so we occupy both ends of the spectrum.

In fact, we found it mandatory to explicitly declare the ‘frame of reference’ in the I3S standard, because without it implementers found it difficult to determine object orientation. So yes, I recommend explicitly requiring/mandating the frame of reference up front (in I3S it is declared as part of the ‘layer metadata’).
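The “declare the frame of reference up front” idea can be sketched like this. The field names here are hypothetical and simplified, not the actual I3S layer-metadata schema; the point is that a consumer fails fast instead of guessing the frame.

```python
# Hypothetical, simplified layer metadata (NOT the real I3S schema):
# the frame of reference is declared explicitly alongside the content.
layer_metadata = {
    "id": "buildings",
    "spatialReference": {"wkid": 4326},      # horizontal CRS (WGS 84 here)
    "heightModelInfo": {                     # vertical frame declaration
        "heightModel": "ellipsoidal",
        "heightUnit": "meter",
    },
}

def frame_of_reference(meta: dict) -> tuple:
    """Return the declared frame, or fail fast if it is missing,
    rather than letting implementers guess the orientation frame."""
    if "spatialReference" not in meta or "heightModelInfo" not in meta:
        raise ValueError("layer metadata must declare its frame of reference")
    return meta["spatialReference"], meta["heightModelInfo"]

crs, height_model = frame_of_reference(layer_metadata)
```

Making the declaration mandatory turns a silent misinterpretation (content rendered in the wrong frame) into an immediate, diagnosable error.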

I think it is extremely helpful to define a standard for using geospatial content (which is typically defined in one of the frames of reference mentioned above) within a virtual reality context. An example of this use case is where we support I3S 3D basemaps (3D meshes, 3D objects, point clouds, etc.) in game engines, marrying geospatial content with game-engine realism. One such app/environment is the ArcGIS Maps SDK for game engines, currently in public beta, to which what you call “Geospatially Anchored Virtual Reality” applies. (More here)

> … The GeoPose standard consists of an implementation-neutral Logical Model. This model establishes the structure and relationships between GeoPose components and also between GeoPose data objects themselves in composite structures. Classes and properties of the Logical Model that are expressed in concrete data objects are identified as implementation-neutral Structural Data Units (SDUs), aliases for elements of the Logical Model. SDUs are grouped to define the implementation-neutral form of the GeoPose Standardization Targets: the specific implementations that the Standard addresses. For each Standardization Target, each implementation technology will have the definition of the encoding or serialization specified in a manner appropriate to that technology.

I haven’t studied it enough to see what the benefits of having this standardized would be, but we could definitely benefit from having it defined by a standards body (at least in the 3D context, so we don’t have to resort to creating our own version).

I just noticed, though, that this (what is quoted below) is exactly the space we occupy: we define reference frames (ECEF, local reference frame, etc.) and generate and consume content accordingly.

> The GeoPose 1.0 Standard excludes assumptions about the interpretation of external specifications, for example, of reference frames. Nor does it assume or constrain services or interfaces providing conversion between GeoPoses of difference types or relying on different external reference frame definitions.

(PS: in the official document, the passage below has been slightly edited, but please note the typo:)

> The GeoPose 1.0 Standard excludes assumptions about the interpretation of external specifications fsuch (typo) as reference frames. Further, the Standard does not assume or constrain services or interfaces providing conversion between GeoPoses of difference types or relying on different external reference frame definitions.

This slide (from the original video that was shared when I circulated my review internally) appeals to me quite a lot: “what is meant by height/elevation?”

We are facing this quite a lot nowadays, where content is captured in various vertical frames (heights above the ellipsoid, ENU, orthometric, etc.). There have been many occasions where data was captured in one system and the content acquisition system and/or creator hadn’t paid attention to the height reference, so discrepancies crop up (data floating over our basemaps, and/or vice versa).
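The ellipsoidal-vs-orthometric mismatch described above comes down to one relation, H ≈ h − N, where h is the height above the ellipsoid, N is the geoid undulation at that location, and H is the orthometric height. A minimal sketch, assuming N is supplied by an external geoid model (e.g. EGM2008):

```python
def orthometric_height(ellipsoidal_h: float, geoid_undulation_n: float) -> float:
    """H ≈ h − N: orthometric height from ellipsoidal height and the
    geoid undulation N at the same lat/lon. Sketch only: a real
    conversion looks N up in a geoid model; mixing the two height
    systems unnoticed is exactly the 'data floating over the basemap'
    symptom, since N can be on the order of tens of meters."""
    return ellipsoidal_h - geoid_undulation_n

# Example: h = 48.0 m above the ellipsoid, N = -32.0 m undulation
# gives an orthometric height of 80.0 m, a large visible offset.
H = orthometric_height(48.0, -32.0)
```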

And for us, the simplest use case of all could be a good starting point. In the example below, you want to capture a view while visualizing in augmented reality (say, in our ArcGIS Maps SDK for game engines) and share that view across all our apps; the view can be captured based on this simple definition you have. This basic notation is also very similar to how we capture bounding volumes in I3S, so again, an area of conformity with existing OGC standards such as I3S. It is also a good way, one that appeals to the geospatial world, to show how you can capture this augmented reality view and share it unequivocally.

In summary, all I have is a positive review, and I would recommend the working group incorporate more of these real-world use cases, on which we’ll be glad to collaborate.

Hope this helps,
Regards,
_tam
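The “capture a view and share it unequivocally” use case above can be sketched as a small round-trip. The field names follow my reading of the GeoPose Basic-YPR JSON form (WGS 84 position plus yaw/pitch/roll); treat the exact names and the coordinate values as illustrative, not normative.

```python
import json

# Sketch of a GeoPose Basic-YPR-style data object: position on WGS 84
# plus yaw/pitch/roll angles. Field names are illustrative, based on
# the Basic-YPR target of the (then-draft) GeoPose 1.0 JSON encoding.
captured_view = {
    "position": {"lat": 34.057, "lon": -117.195, "h": 412.0},
    "angles": {"yaw": 181.5, "pitch": -12.0, "roll": 0.0},
}

payload = json.dumps(captured_view)   # share across apps as plain JSON
restored = json.loads(payload)        # any consumer can re-anchor the view
```

Because both the frame (WGS 84) and the orientation convention are fixed by the target, the receiving app needs no side-channel agreement to reconstruct the view.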
cperey commented 2 years ago

There are some image files that are not in this issue but in e-mail. They can be reviewed separately when we take up these comments.

3DXScape commented 2 years ago

The noted typo has been corrected.