MirageXR is a reference implementation of an XR training system. It enables experts and learners to share experience via XR and wearables, using ghost tracks, real-time feedback, and anchored instruction.
Currently, models are simply displayed. But what if users could move them, drop them, or collide them with other objects? Play animations on interaction? Tap into glTF's behaviour-graph functionality? Attach labels?
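Much of the interaction data is already carried by the asset format itself: glTF is a JSON document with a top-level `animations` array, so a viewer can discover the available clips and trigger one when the user taps a model. As a minimal sketch (the inline asset and both helper functions are hypothetical, not part of MirageXR or any glTF SDK), this shows how such a lookup could work:

```python
import json

# Hypothetical minimal glTF document for illustration. A real glTF file
# declares animation clips in a top-level "animations" array; channels
# and samplers (left empty here) bind keyframe data to node properties.
GLTF_DOC = json.dumps({
    "asset": {"version": "2.0"},
    "animations": [
        {"name": "Open", "channels": [], "samplers": []},
        {"name": "Spin", "channels": [], "samplers": []},
    ],
})


def list_animation_names(gltf_json: str) -> list:
    """Return the names of all animation clips declared in a glTF document."""
    gltf = json.loads(gltf_json)
    return [anim.get("name", "animation_%d" % i)
            for i, anim in enumerate(gltf.get("animations", []))]


def clip_for_interaction(gltf_json: str, preferred: str):
    """Pick the clip a viewer might play on tap: the preferred name if it
    exists in the asset, otherwise the first declared clip, otherwise None."""
    names = list_animation_names(gltf_json)
    if preferred in names:
        return preferred
    return names[0] if names else None


print(list_animation_names(GLTF_DOC))          # ['Open', 'Spin']
print(clip_for_interaction(GLTF_DOC, "Open"))  # Open
```

An interaction layer on top of this could then route a tap gesture to the selected clip, with physics (move, drop, collide) handled by the host engine rather than the asset.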