I'm not sure I follow the example.
In this example, I would imagine that the pose of the helicopter would be expressed relative to the nearby platform/landing pad. When it starts to fly away from the platform, expressing its pose relative to that platform may become inadequate, as small changes in the original anchor can result in big changes to the helicopter (lever-arm error amplification). I would fix this today by creating intermediate anchors as it moved, but that seems like a lot of annoying work.
I would like to know how HoloLens developers deal with these sorts of issues. All content must be attached to the world; how does MSFT recommend people deal with this there? (I'm blanking on folks' GitHub IDs, anyone have suggestions?) @trevordev is this something you can comment on (you're at MSFT, right?)
For world-scale scenarios, MSDN recommends HoloLens devs always render anchored holograms within 3 meters of their anchor. From https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-anchors :
Spatial anchors stabilize their coordinate system near the anchor's origin. If you render holograms more than about 3 meters from that origin, those holograms may experience noticeable positional errors in proportion to their distance from that origin, due to lever-arm effects. That works if the user stands near the anchor, since the hologram is far away from the user too, meaning the angular error of the distant hologram will be small. However, if the user walks up to that distant hologram, it will then be large in their view, making the lever-arm effects from the faraway anchor origin quite obvious.
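To make that guidance concrete, here is a minimal sketch in WebXR-flavoured TypeScript (hedged: the `frame.getPose` / `anchor.anchorSpace` shape follows later drafts of the anchors module, and `contentPosition` is hypothetical app state) of checking whether content parented to an anchor is still within the recommended ~3 m of that anchor's origin:

```ts
// Minimal sketch: verify that content parented to an anchor stays within the
// ~3 m radius recommended above. If this returns false, the object should
// probably get its own anchor instead of sharing a distant one.
const MAX_ANCHOR_DISTANCE_M = 3.0;

function isWithinAnchorRange(frame: XRFrame,
                             refSpace: XRReferenceSpace,
                             anchor: XRAnchor,
                             contentPosition: DOMPointReadOnly): boolean {
  const anchorPose = frame.getPose(anchor.anchorSpace, refSpace);
  if (!anchorPose) return true;            // anchor not tracked this frame; nothing to compare
  const a = anchorPose.transform.position; // anchor origin in the reference space
  const d = Math.hypot(contentPosition.x - a.x,
                       contentPosition.y - a.y,
                       contentPosition.z - a.z);
  return d <= MAX_ANCHOR_DISTANCE_M;
}
```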
Thanks @phu-ms ... what's the recommended approach to dealing with moving content? Create intermediate anchors?
If the object is moving around in world-space then the recommendation is to use a base stationary frame-of-reference. Then when it comes to rest and you want to be able to find it again, you can create an anchor at that point.
@thetuvix, Alex can probably elaborate more on that.
As @phu-ms points out, we only recommend creating a SpatialAnchor (the free-space anchor type in Windows MR) when an object is at rest.
The https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-anchors article talks about this:
Render highly dynamic holograms using the stationary frame of reference instead of a spatial anchor
If you have a highly dynamic hologram, such as a character walking around the room, or a floating UI that follows along the wall near the user, it is best to skip spatial anchors and render those holograms directly in the coordinate system provided by the stationary frame of reference (i.e. in Unity, you achieve this by placing holograms directly in world coordinates without a WorldAnchor). Holograms in a stationary coordinate system may experience drift when the user is far from the hologram, but this is less likely to be noticeable for dynamic holograms: either the hologram is constantly moving anyway, or its motion constantly keeps it close to the user, where drift will be minimized.
One interesting case of dynamic holograms is an object that is animating from one anchored coordinate system to another. For example, you might have two castles 10 meters apart, each on their own spatial anchor, with one castle firing a cannonball at the other castle. At the moment the cannonball is fired, you can render it at the appropriate location in the stationary frame of reference, so as to coincide with the cannon in the first castle's anchored coordinate system. It can then follow its trajectory in the stationary frame of reference as it flies 10 meters through the air. As the cannonball reaches the other castle, you may choose to move it into the second castle's anchored coordinate system to allow for physics calculations with that castle's rigid bodies.
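To sketch that cannonball hand-off in WebXR-ish terms (hedged: this uses the pose/space shape of later API drafts and `DOMMatrix` purely for convenience; it is not the proposal text itself), re-parenting is just expressing the same point in a different space:

```ts
// Express a point known in castle A's anchor space in the base (stationary)
// reference space, then later re-express it in castle B's anchor space.
// frame.getPose(spaceA, spaceB) gives spaceA's origin posed in spaceB, so its
// matrix maps spaceA-local points into spaceB coordinates.
function anchorLocalToBase(frame: XRFrame, baseSpace: XRReferenceSpace,
                           anchorSpace: XRSpace, localPoint: DOMPointReadOnly) {
  const pose = frame.getPose(anchorSpace, baseSpace);
  if (!pose) return null;                                   // anchor not tracked
  return DOMMatrix.fromFloat32Array(pose.transform.matrix)  // anchor -> base
      .transformPoint(localPoint);
}

function baseToAnchorLocal(frame: XRFrame, baseSpace: XRReferenceSpace,
                           anchorSpace: XRSpace, basePoint: DOMPointReadOnly) {
  const pose = frame.getPose(anchorSpace, baseSpace);
  if (!pose) return null;
  return DOMMatrix.fromFloat32Array(pose.transform.inverse.matrix) // base -> anchor
      .transformPoint(basePoint);
}
```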
For @judax's example in WebXR, a helicopter sitting on a platform would have its own anchor (or perhaps would be at some rigid offset from the platform's anchor). When it takes off, that anchor would be discarded, with the helicopter now being tracked directly in the app's base XRFrameOfReference. Once the helicopter lands on the new platform, a new anchor would be created for the helicopter (or it could share the anchor of the new platform).
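A rough sketch of that lifecycle (again using the shape of the later anchors module, `createAnchor` / `anchorSpace`, rather than the exact API under discussion here; `platformAnchor` and `helicopter` are hypothetical app state):

```ts
declare const platformAnchor: XRAnchor;            // assumed: anchor on the first platform
declare const helicopter: {                        // hypothetical app-side object
  setPoseInBaseSpace(t: XRRigidTransform): void;
  poseInBaseSpace: XRRigidTransform;
};

// While parked, the helicopter's pose comes from the platform's anchor
// (optionally plus a rigid offset).
let heliAnchor: XRAnchor | null = platformAnchor;

function onTakeOff(frame: XRFrame, baseSpace: XRReferenceSpace) {
  // Record the current pose in the base frame of reference, then stop using
  // the anchor; the helicopter is now simulated directly in base space.
  const pose = frame.getPose(heliAnchor!.anchorSpace, baseSpace);
  if (pose) helicopter.setPoseInBaseSpace(pose.transform);
  heliAnchor = null;
}

async function onLanding(frame: XRFrame, baseSpace: XRReferenceSpace,
                         newPlatformAnchor?: XRAnchor) {
  // Either share the new platform's anchor, or create a fresh anchor right
  // under the helicopter's current pose.
  heliAnchor = newPlatformAnchor ??
      await frame.createAnchor(helicopter.poseInBaseSpace, baseSpace);
}
```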
This works great for HoloLens apps because, while the SpatialStationaryFrameOfReference attempts to keep its origin stationary over time, it allows the origin to drift as necessary to preferentially stabilize the coordinate system near the user. This isn't a strong enough promise for an object I want to leave somewhere and find again 10 minutes later (for that, you'd want to make a new SpatialAnchor that stabilizes near its origin and place the object at that origin), but it is a strong enough promise for a moving object that is often near the user anyway.
We explicitly discourage apps from trying to juggle objects between a lattice of anchors themselves - in doing so, the app is basically reimplementing the platform's head tracking, but with less data. Relying on the developer promise of the stationary frame gives apps the best of both worlds. It may behoove us to define an XRFrameOfReference such as "world" with that kind of consistent "stabilize near the user" promise across platforms - that would give world-scale WebXR applications a consistent substrate space within which to reason about dynamic objects.
Thanks @thetuvix, that's the piece I was wondering about. One big thing to keep in mind is that we're conflating HoloLens's Anchor and FrameOfReference concepts, as usual, when we talk here, since the ideas behind WebXR anchors are probably closer to FrameOfReference than Anchor (we aren't talking about long-term stability and possible persistence, we're talking more about the moment-by-moment anchoring to the world).
I hadn't done enough digging in the HoloLens docs (since I haven't done any real programming on it) to know about the SpatialStationaryFrameOfReference. That's a nice pattern, and is similar to something I have found myself doing in some WebXR AR demos and tests.
There is a second pattern I also combine this with, which I used during the computer vision testing I did. There, I wanted to do asynchronous processing of video frames, and needed the camera pose expressed relative to a known anchor so that when the vision work was done, I could recover the pose of things I'd computed relative to the camera (i.e., faces, markers, whatever) in world coordinates. In that case I would use more anchors over time near the camera, since I thought I wanted stronger guarantees of camera-pose stability ... not sure if I needed that, though, since even slow CV tends to be "moderately fast" (e.g., multiple frames per second at worst), so things don't move or change that much.
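A sketch of that second pattern, hedged the same way (later-draft API shape; `viewerSpace` assumed to come from `session.requestReferenceSpace('viewer')`, and the detection transform coming from the app's own CV code):

```ts
// At frame-capture time, remember the camera (viewer) pose relative to a
// nearby anchor so that the anchor's stability carries over to CV results
// that arrive later.
function snapshotCameraInAnchor(frame: XRFrame, viewerSpace: XRReferenceSpace,
                                anchor: XRAnchor): DOMMatrix | null {
  const pose = frame.getPose(viewerSpace, anchor.anchorSpace);
  return pose ? DOMMatrix.fromFloat32Array(pose.transform.matrix) : null;
}

// Later, when the asynchronous CV work finishes: `detectionInCamera` is the
// detected face/marker pose in camera coordinates at capture time. Composing
// anchor <- camera <- detection gives the detection's pose in anchor space,
// which can then be mapped into world space whenever it is needed.
function detectionInAnchorSpace(cameraInAnchor: DOMMatrix,
                                detectionInCamera: DOMMatrix): DOMMatrix {
  return cameraInAnchor.multiply(detectionInCamera);
}
```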
I wonder if it's worth having a frame of reference like that in WebXR, independent of anchors. This solves the "should stage be an anchor or a coordinate system of some form" issue (on a platform where there is a fixed stage, the stage == SpatialStationaryFrameOfReference), and it greatly simplifies simple AR examples (attach content to the SpatialStationaryFrameOfReference if you just "want it in 3D", and reserve anchors for when you really, really want content attached to something you got from a hit test or some other future detection/tracking capability).
Having this doesn't change the anchor discussion, really: we still need "anchors based on hittest" and "anchors at arbitrary pose" (since any future capabilities like CV will want to create anchors relative to the world based on custom CV).
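For concreteness, the two paths might look roughly like this (heavily hedged: this follows the shape of the hit-test and anchors module drafts rather than anything agreed in this thread, and `placeContentAt` / `attachContentTo` are hypothetical app helpers):

```ts
declare function placeContentAt(space: XRSpace, offset: XRRigidTransform): void; // hypothetical
declare function attachContentTo(space: XRSpace): void;                          // hypothetical

let hitTestSource: XRHitTestSource;

async function setUp(session: XRSession) {
  // (a) "Just want it in 3D": place content directly in a base reference space;
  //     a stationary-frame-style space would play exactly this role.
  const baseSpace = await session.requestReferenceSpace('local');
  placeContentAt(baseSpace, new XRRigidTransform({ x: 0, y: 0, z: -1 }));

  // (b) Content genuinely attached to the world: anchor it from a hit test.
  const viewerSpace = await session.requestReferenceSpace('viewer');
  hitTestSource = await session.requestHitTestSource({ space: viewerSpace });
}

function onXRFrame(frame: XRFrame) {
  const hits = frame.getHitTestResults(hitTestSource);
  if (hits.length > 0) {
    hits[0].createAnchor().then(anchor => attachContentTo(anchor.anchorSpace));
  }
}
```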
Closing old issue. Current draft of specification should cover the scenario described. Please reopen this issue or create a new one if you disagree.
This issue was originally part of the Open Questions section in the explainer but I decided to move it to an issue so some specific discussion can happen around it. I think this is related to the concept of trackables that has also been captured in a different issue so if we all feel like this topic is correctly captured in a different issue we should close it. Let me know what you think.
Rationale: Take, for example, a helicopter landing on a platform; the helicopter may not need to be represented by its own anchor, as the platform could already be represented by one. But when the helicopter starts to fly, it is no longer influenced by the platform, so it may need to be represented by an anchor to correctly reflect its pose changes. Then the helicopter could land on a different platform, be attached to it, and no longer require an anchor to represent its pose.