WEKIT-ECS / MIRAGE-XR

MirageXR is a reference implementation of an XR training system. It enables experts and learners to share experience via XR and wearables, using ghost tracks, real-time feedback, and anchored instruction.

MRTK or ARFoundation + OpenXR? #2042

Open fwild opened 1 month ago

fwild commented 1 month ago

Fridolin Wild: The more I look into the Quest 3 setup, the more I believe we should probably retire the MRTK and use ARFoundation only. Do you happen to know if ARFoundation covers all our needs? (It does not cover image targets on HoloLens, but we have that already via Vuforia.)

Benedikt Hensen:

Yes, I would say that we are not using anything completely specific to the MRTK that couldn't be replaced by other means. ARFoundation is made for smartphone AR, so the Android and iOS versions should always work. The question is whether all HoloLens features and sensors are still accessible without the MRTK.

The following aspects in the project would need to be replaced. All seem possible, although some of them require a bit of work, especially the last two points, where we need to double-check whether the HoloLens version would work with them.

Concluding, it is likely possible to remove the MRTK, but it could be a medium-sized project to do so. We would potentially lose a bit of comfort in development and might need to re-implement some foundational features that previously existed out of the box, like object movement and the cross-platform adjustment of input handling. It is a pity that the MRTK acts like a core engine extension with its own architecture and required scene setup instead of being an optional auxiliary library - otherwise, we could switch to ARFoundation and keep the helpful scripts of the MRTK. But since it hooks into the input system and everything around it, that won't work.

An alternative might also be to look into upgrading to MRTK 3. It is somewhat flying under the radar since it was moved into a different repository, but in contrast to the original MRTK, it is steadily getting new commits and doesn't seem dead at all. Releases might be rarer than in the past, but they seem to have already passed the experimental stage and reached stable release numbers. https://github.com/MixedRealityToolkit/MixedRealityToolkit-Unity

It seems like it is already more geared towards the XR structure of ARFoundation and lists the Quest 1 and 2 as supported platforms - I also saw some videos online of it working on the Quest 3, so that is probably possible. However, upgrading to MRTK 3 is quite substantial, as it relies on a completely new, more modular architecture with lots of little feature packages that have basically nothing in common with MRTK 2. So the switch to MRTK 3 also requires us to work through the above points and insert an MRTK 3 equivalent instead. https://learn.microsoft.com/de-de/windows/mixed-reality/mrtk-unity/mrtk3-overview/architecture/mrtk-v2-to-v3 (edited)

fwild commented 1 month ago

Re Quick testing and in-editor simulation: yes, that's what the XR Simulation feature seems to do - though it seems less pleasant to use in comparison. It promises quite similar functionality (and then some), minus the preloading of a spatial map.

fwild commented 1 month ago

Re Camera access for taking pictures: as far as I can see, it uses the native camera interface for Windows, not something from the MRTK.

fwild commented 1 month ago

Re MRTK cross-platform adjustments: I think this is what the XR interaction management package does for OpenXR - it looks like it can be configured to handle multimodal input, and differently on different platforms.
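To illustrate the cross-platform input idea: with Unity's Input System, a single action can carry several bindings, and whichever device is actually present drives it. This is only a hedged sketch, not our actual setup - the class name and binding choice are illustrative, and the exact binding paths would need to be verified per platform.

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

// Illustrative sketch: one "Select" action with bindings for several
// device classes, so the same script reacts on headset and smartphone.
public class SelectInput : MonoBehaviour
{
    private InputAction select;

    private void OnEnable()
    {
        select = new InputAction("Select");
        // XR controller trigger (Quest, HoloLens motion controllers)
        select.AddBinding("<XRController>{RightHand}/triggerPressed");
        // Touch tap as the smartphone fallback
        select.AddBinding("<Touchscreen>/primaryTouch/tap");
        select.performed += _ => OnSelect();
        select.Enable();
    }

    private void OnDisable() => select.Disable();

    private void OnSelect()
    {
        Debug.Log("Select performed");
    }
}
```

Platform-specific differences could then live in which bindings (or binding groups/control schemes) are enabled per build, rather than in separate input code paths.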

fwild commented 1 month ago

Another alternative to this is to move most of the core functionality into a separate package and then use single-platform build projects instead. This has charm, too, but development might get more complicated, as we would need to release the package first (or import and pull it from a git repo)?

fwild commented 1 month ago

Benedikt Hensen:

Yes, a core package would be the cleanest approach. It would allow us to fine-tune the version for each platform individually to get the optimal design for each device.

The automatic cross-platform migration always has the disadvantage that the usage experience feels slightly off on devices that were not the main development target, since it then uses functions that were not primarily meant for that device and, e.g., do not utilize all of its capabilities. And putting everything in one repository also makes the project quite complex and bloated.

The main task for creating a core package would be to identify which logic is platform-independent. It probably also requires adjustments to the current architecture so that the underlying logic becomes fully platform-independent. We have a chance to move in that direction when we add the new data model. If we properly separate it from the UI parts, a large part of this separation is already achieved. At the moment, the loading routine sets up all GameObjects once and distributes information onto them. If changes are made in the scene, we keep the data in sync by applying the changes to it at the same time. Instead, a nicer model would be to have a single source of truth in the data and connect the GameObjects to it via events, so the GameObjects can adjust by themselves if the data changes.
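The single-source-of-truth idea could be sketched roughly like this - the data class raises an event on change, and the GameObject only listens. The class names (`AnnotationData`, `AnnotationView`) are hypothetical placeholders, not types from the project.

```csharp
using System;
using UnityEngine;

// Hypothetical data-layer class: the single source of truth.
// It knows nothing about GameObjects; it only raises an event on change.
public class AnnotationData
{
    public event Action<AnnotationData> Changed;

    private Vector3 position;
    public Vector3 Position
    {
        get => position;
        set { position = value; Changed?.Invoke(this); }
    }
}

// Hypothetical view component: binds to the data and adjusts itself.
public class AnnotationView : MonoBehaviour
{
    private AnnotationData data;

    public void Bind(AnnotationData model)
    {
        data = model;
        data.Changed += OnDataChanged;
        OnDataChanged(data); // initial sync on bind
    }

    private void OnDataChanged(AnnotationData model)
    {
        // Nobody writes to the GameObject directly;
        // all edits go through the data, and the view follows.
        transform.position = model.Position;
    }

    private void OnDestroy()
    {
        if (data != null) data.Changed -= OnDataChanged;
    }
}
```

With this shape, scene edits become `data.Position = ...` calls, and platform-specific views (HoloLens, Quest, smartphone) can each subscribe to the same platform-independent data layer.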

Once we have a suitable architecture, the technical part of setting up a package is actually quite straightforward with Unity's packaging system. The core package can even be edited from a project that uses it if the package is imported from the local disk (e.g. by cloning it first and then connecting it to all the platform-specific projects). But this approach requires developers to be careful about what they change, as it could break all the other projects, so automated testing etc. could be vital for that setup.
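Concretely, the local-disk setup uses Unity's package manifest format. A minimal sketch (package name and paths are illustrative, not decided):

```json
{
  "name": "com.wekit.miragexr.core",
  "version": "0.1.0",
  "displayName": "MirageXR Core",
  "unity": "2021.3"
}
```

This `package.json` sits at the root of the cloned core-package folder; each platform-specific project then references it from `Packages/manifest.json` with a file dependency such as `"com.wekit.miragexr.core": "file:../../miragexr-core"`, which makes the package editable in place from any of the consuming projects.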

fwild commented 1 month ago

The idea that we manipulate the GameObjects by changing data in the data model is very appealing - with the data change reverse-triggering events that update the GameObject. I'm not quite sure whether this works everywhere, though, for example with the ghost tracks.

fwild commented 1 month ago
  1. The services attached to the Root script and GameObject (and the i5 services, finally unifying how we use them) should fit quite nicely into a MirageXR service package (containing things like the floor manager, AI manager, brand manager, calibration manager, and exception manager)
  2. The UI kit as well as the spatial UI kit
  3. The content management (activity logic, augmentations, task stations, search, etc. - the activity selection is even already encapsulated in its own scene)

I think that’s pretty much it.