microsoft / MixedReality-WorldLockingTools-Unity

Unity tools to provide a stable coordinate system anchored to the physical world.
https://microsoft.github.io/MixedReality-WorldLockingTools-Unity/README.html
MIT License

Spatial Alignment Between HoloLens 2 Headsets Using SpacePins #304

Open jdcast opened 1 year ago

jdcast commented 1 year ago

I hope this is the right place to post these questions; if it's not, I'm happy to move them to a more appropriate venue. I'm new to Unity/HoloLens development in general.

I'm trying to build an application that allows multiple HoloLens 2 headsets (i.e. multiple users) to see and place holograms in real time (primarily on walls) using a shared coordinate space. I'm fairly certain this is a "solved" problem, and my current understanding is that World Locking Tools (WLT) + Azure Spatial Anchors (ASA) are the modern tools for accomplishing it.

I've been getting myself up to speed on WLT through the main resource archive as well as these discussions:

  1. https://github.com/microsoft/MixedReality-WorldLockingTools-Unity/issues/283
  2. https://github.com/microsoft/MixedReality-WorldLockingTools-Unity/issues/281#issuecomment-1102582622
  3. https://github.com/microsoft/MixedReality-WorldLockingTools-Unity/issues/264
  4. https://github.com/microsoft/MixedReality-WorldLockingTools-Unity/issues/261

In my hypothetical application, users will place holograms on the spatial mesh (primarily a wall) and see each other's manipulations in real time via PUN2, within a confined workspace of roughly 10'x10' on the floor and 10'x30' on the wall. From the resources above, I understand that multiple SpacePins should allow for higher positional accuracy. Because of the floor+wall "workspace" arrangement, I figured it's best to provide some number of SpacePins on both the floor and the wall, which the first user in a game room would pin to establish a common coordinate space for users joining later. To this end, I've been playing with the pinTestSofa example, simply replacing the furniture with a floor plane carrying 4 SpacePins and a wall plane carrying 2 SpacePins (I'm not certain of the numbers needed for each); see the image below for the scene. My assumption is that the floor pins would maintain better accuracy in the horizontal plane, and the wall pins would help maintain better accuracy in the vertical dimension along the wall.

[Image: the modified pinTestSofa scene, with a floor plane carrying 4 SpacePins and a wall plane carrying 2]

The idea would be that something very similar to this example would serve as the first scene in the application, seen by the first user; all other users would then skip to the main application (a second scene), which polls for the updated coordinate space through ASA.

My questions:

  1. When I try modifying the pinTestSofa scene as described above, things seem to work as I would expect until I start moving more than two pins. When positioning a third, fourth, etc. SpacePin, the pin often simply won't move to the desired location. Additionally, the framerate drops to ~30fps with only two placed pins, and to ~10fps with 4 placed pins. Finally, I sometimes see the furniture drift as I move about, after having tried to place more than 2 or 3 of the 6 SpacePins in the scene. Is any of this expected behavior?
  2. Is there a better method and/or geometry for placing the SpacePins to ensure high accuracy of holograms placed on the wall?
  3. I'd like to store the relative positions of the placed objects with respect to one another and re-instantiate that layout in a later session for further editing (i.e. removal/addition/repositioning of holograms in the arrangement). Is this possible, and if so, what might be an approach that lets the configuration be remembered without being "anchored" to physical space? I'm assuming that parenting the placed objects, and perhaps giving the parent object its own SpacePin that a user can place to anchor the arrangement in a new scene for further editing, would be a start?

Thanks!

genereddick commented 1 year ago

If @fast-slow-still is around he can answer everything better than I can, but I'll add what I have learned (sort of) over the past few months.

  1. I'm not sure about the framerate drops, but we see a lot of drift in either of two cases. a) The virtual poses (in Unity) and the physical coordinates of the SpacePins don't match: say you are trying to pin the edges of a desk that is 3 meters wide, but your holographic model is only 2.5 meters wide. This is even more dramatic if the rotations are mismatched. Generally, we have had success by setting no rotations (just the identity pose) on the virtual positions we want to pin, and then using SpacePinOrientables to let WLT figure out the rotation on its own. b) We have accidentally pinned a bunch of identical virtual positions (say, all 0,0,0), or we changed those virtual positions but didn't Reset the SpacePins (so they are trying to match the wrong virtual position to the pinned location).
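As a rough illustration of that position-only pinning pattern, here is a sketch assuming WLT's `SpacePinOrientable`/`Orienter` API; the `PinPlacer` component and its fields are hypothetical, and the method names should be checked against the WLT version you are using:

```csharp
// Sketch only, not a drop-in implementation. Assumes the SpacePinOrientable
// and Orienter types from World Locking Tools
// (Microsoft.MixedReality.WorldLocking.Core); verify against your WLT version.
using UnityEngine;
using Microsoft.MixedReality.WorldLocking.Core;

public class PinPlacer : MonoBehaviour
{
    [SerializeField] private Orienter orienter;          // shared orienter so all pins agree on rotation
    [SerializeField] private SpacePinOrientable[] pins;  // virtual poses authored with identity rotation

    void Start()
    {
        // Hand every pin the same orienter; it infers a consistent rotation
        // from the set of pinned positions, so no authored rotations are needed.
        foreach (var pin in pins)
        {
            pin.SetOrienter(orienter);
        }
    }

    // Call when the user drops a pin at a measured physical location.
    public void OnPinPlaced(int index, Vector3 frozenPosition)
    {
        // Position-only: rotation comes from the orienter, so mismatched
        // authored rotations can't fight the measured ones.
        pins[index].SetFrozenPosition(frozenPosition);
    }

    // Call after changing the virtual (modeling) poses, so stale pins don't
    // keep pulling the scene toward outdated targets.
    public void ClearPins()
    {
        foreach (var pin in pins)
        {
            pin.Reset();
        }
    }
}
```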
  2. Not sure.
  3. The way we handle this is by using image recognition to get the locations of various positions (you could use ray hits, or just place objects around). We take those world positions and compute their positions relative to a chosen origin (we set one of those positions as the virtual origin, i.e. Pose.identity). We save that to disk (and to the room in Photon). When we restart and reload those markers, they have the same relative positions to each other. We can then pin them; after pinning 2-3 of them, everything has rotated into the same orientation as when originally created.
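A minimal sketch of that relative-pose bookkeeping in Unity C#; the `SavedLayout` helper is hypothetical (not part of WLT or Photon), and `JsonUtility` is just one persistence option:

```csharp
// Sketch: store marker poses relative to a chosen origin marker, so the
// layout can be re-instantiated in a later session and re-pinned there.
// SavedLayout is a hypothetical helper, not a WLT type.
using System;
using System.Collections.Generic;
using UnityEngine;

[Serializable]
public class SavedLayout
{
    public List<Vector3> positions = new List<Vector3>();
    public List<Quaternion> rotations = new List<Quaternion>();

    // Record each marker's pose expressed in the origin marker's frame.
    public static SavedLayout FromWorld(Pose origin, IEnumerable<Pose> markers)
    {
        var layout = new SavedLayout();
        var invRot = Quaternion.Inverse(origin.rotation);
        foreach (var m in markers)
        {
            layout.positions.Add(invRot * (m.position - origin.position));
            layout.rotations.Add(invRot * m.rotation);
        }
        return layout;
    }

    // Re-express the saved layout against wherever the origin lands next session.
    public List<Pose> ToWorld(Pose newOrigin)
    {
        var result = new List<Pose>();
        for (int i = 0; i < positions.Count; ++i)
        {
            result.Add(new Pose(
                newOrigin.position + newOrigin.rotation * positions[i],
                newOrigin.rotation * rotations[i]));
        }
        return result;
    }
}

// JsonUtility.ToJson(layout) can then be written to disk or stashed in a
// Photon room property, and parsed back with JsonUtility.FromJson on reload.
```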