Toni-SM / semu.xr.openxr

MIT License

OpenXR compact binding for creating extended reality applications on NVIDIA Omniverse

This extension provides a compact Python binding (on top of the open OpenXR standard for augmented reality (AR) and virtual reality (VR)) for creating extended reality applications that take advantage of NVIDIA Omniverse rendering capabilities. In addition to updating views (e.g., head-mounted displays), it enables subscription to any input event (e.g., controller buttons and triggers) and execution of output actions (e.g., haptic vibration) through a simple and efficient API for accessing conformant devices such as the HTC Vive, Oculus headsets, and others.
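For instance, once the OpenXR instance, system, and session are created, subscribing to a controller input takes a single call. A minimal sketch, using only calls that appear in the Sample code section below:

from semu.xr.openxr import _openxr

xr = _openxr.acquire_openxr_interface()
# ... instance, system and session setup omitted (see Sample code below) ...

# print the path and value of every left-trigger event
xr.subscribe_action_event("/user/hand/left/input/trigger/value",
                          callback=lambda path, value: print(path, value))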


Target applications: Any NVIDIA Omniverse app with the omni.syntheticdata extension installed (e.g., Isaac Sim, Code)

Supported OS: Linux

Changelog: CHANGELOG.md



Showcase


Extension setup

  1. Add the extension using the Extension Manager or by following the steps in Extension Search Paths

    • Git url (git+https) as extension search path

      git+https://github.com/Toni-SM/semu.xr.openxr.git?branch=main&dir=exts
    • Compressed (.zip) file for import

      semu.xr.openxr.zip

  2. Enable the extension using the Extension Manager or by following the steps in Extension Enabling/Disabling

  3. Import the extension into any Python code and use it:

    from semu.xr.openxr import _openxr
  4. Or use the GUI launcher to directly display the current stage in the HMD
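Before importing it, you can check from the Script Editor that the extension is enabled. A minimal sketch using the Kit extension manager (assuming is_extension_enabled is available in your Kit version):

import omni.kit.app

# query the Kit extension manager for the extension's state
ext_manager = omni.kit.app.get_app().get_extension_manager()
print(ext_manager.is_extension_enabled("semu.xr.openxr"))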


Diagrams

High-level overview of extension usage, including the order of function calls, callbacks and the action and rendering loop

Typical OpenXR application showing the grouping of the standard functions under the compact binding provided by the extension (adapted from openxr-10-reference-guide.pdf)

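In code, this call order maps to the following skeleton (a sketch built only from the functions used in the sample code below):

import omni.physx
from semu.xr.openxr import _openxr

xr = _openxr.acquire_openxr_interface()

# instance, system and session setup (in this order)
xr.init()
xr.create_instance()
xr.get_system()
# ... subscribe input/output actions here ...
xr.create_session()
# ... set up cameras, viewports and frame transformations here ...

# action and rendering loop, driven here by the physics step
def on_simulation_step(step):
    if xr.poll_events() and xr.is_session_running():
        xr.poll_actions()
        xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_LOCAL)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)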


Sample code

The following sample code shows a typical workflow that configures and renders, on a stereo headset, the view generated in an Omniverse application. It configures and subscribes two input actions for the left controller: 1) mirroring the controller's pose on a simulated sphere, and 2) changing the sphere's dimensions according to the trigger position. In addition, an output action (a haptic vibration) is configured and executed when the controller's trigger reaches its maximum position.

A short video after the code shows a test of the OpenXR application, run from the Script Editor, using an HTC Vive Pro.

import omni
import omni.physx
from pxr import UsdGeom
from semu.xr.openxr import _openxr

# get stage unit
stage = omni.usd.get_context().get_stage()
meters_per_unit = UsdGeom.GetStageMetersPerUnit(stage)

# create a sphere (1 centimeter radius) to mirror the controller's pose
sphere_prim = stage.DefinePrim("/sphere", "Sphere")
sphere_prim.GetAttribute("radius").Set(0.01 / meters_per_unit)

# acquire interface
xr = _openxr.acquire_openxr_interface()

# setup OpenXR application using default parameters
xr.init()
xr.create_instance()
xr.get_system()

# action callback
def on_action_event(path, value):
    # process controller's trigger
    if path == "/user/hand/left/input/trigger/value":
        # modify the sphere's radius (from 1 to 10 centimeters) according to the controller's trigger position
        sphere_prim.GetAttribute("radius").Set((value * 9 + 1) * 0.01 / meters_per_unit)
        # apply haptic vibration when the controller's trigger is fully depressed
        if value == 1:
            xr.apply_haptic_feedback("/user/hand/left/output/haptic", {"duration": _openxr.XR_MIN_HAPTIC_DURATION})
    # mirror the controller's pose on the sphere (cartesian position and rotation as quaternion)
    elif path == "/user/hand/left/input/grip/pose":
        xr.teleport_prim(sphere_prim, value[0], value[1])

# subscribe controller actions (haptic actions don't require callbacks) 
xr.subscribe_action_event("/user/hand/left/input/grip/pose", callback=on_action_event, reference_space=_openxr.XR_REFERENCE_SPACE_TYPE_LOCAL)
xr.subscribe_action_event("/user/hand/left/input/trigger/value", callback=on_action_event)
xr.subscribe_action_event("/user/hand/left/output/haptic")

# create session and define interaction profiles
xr.create_session()

# setup cameras and viewports and prepare rendering using the internal callback
xr.set_meters_per_unit(meters_per_unit)
xr.setup_stereo_view()
xr.set_frame_transformations(flip=0)
xr.set_stereo_rectification(y=0.05)

# execute action and rendering loop on each simulation step
def on_simulation_step(step):
    if xr.poll_events() and xr.is_session_running():
        xr.poll_actions()
        xr.render_views(_openxr.XR_REFERENCE_SPACE_TYPE_LOCAL)

physx_subs = omni.physx.get_physx_interface().subscribe_physics_step_events(on_simulation_step)
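To stop the action and rendering loop, dropping the physics-step subscription should suffice (the usual omni.physx pattern, where releasing the subscription object unsubscribes the callback):

physx_subs = None  # release the subscription to stop the per-step callback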

Watch the sample video


GUI launcher

The extension also provides a graphical user interface that helps to launch a partially configurable OpenXR application from a window. This interface is located in the Add-ons > OpenXR UI menu.

The first four options (Graphics API, Form factor, Blend mode, View configuration type) cannot be modified once the OpenXR application is running. They are used to create and configure the OpenXR instance, system, and session.

The other options (under the central separator) can be modified while the application is running. They allow, for example, modifying the pose of the reference system or applying transformations to the images to be rendered.


Extension API

Acquiring extension interface
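A minimal sketch of acquiring and releasing the interface. acquire_openxr_interface appears in the sample code above; release_openxr_interface is an assumption here, following the usual Omniverse acquire/release interface pattern:

from semu.xr.openxr import _openxr

# acquire the interface (as in the sample code above)
xr = _openxr.acquire_openxr_interface()

# ... use the interface ...

# assumed counterpart for cleanup; verify it exists in the extension API
_openxr.release_openxr_interface(xr)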

API

The following functions are provided on the OpenXR interface:

Available enumerations

Available constants