iTowns / itowns

A Three.js-based framework written in JavaScript/WebGL for visualizing 3D geospatial data
http://www.itowns-project.org

[Proposal - WebXR controllers] Add WebXR controllers interaction #2229

Open jogarnier opened 8 months ago


This issue is a feature proposal. Feel free to upvote (with :+1: ), comment, and provide your use cases if you're interested in this feature.

Context

(Following the TODOs of https://github.com/iTowns/itowns/pull/2092)


Expected behavior

Proposal

Add a demo implementation of WebXR controllers. The scope of this example is exclusive to VR, and its exact bounds are yet to be defined (see the Potential Problems section).

Defining the iTowns controllers API

Registering and using WebXR controllers can be factored so that iTowns creates and technically binds the instances. There are two types of controllers:

Tests and a PoC have been done on the latter. Controller binding is not guaranteed to be consistent from one model to another; in this context, it has mainly been tested on a Pico 4.
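Whatever the two controller types end up covering, here is a minimal sketch of what iTowns could create and bind internally, using the standard three.js WebXRManager calls; the `setupController` helper name is illustrative, not an existing iTowns API:

```js
// Sketch only: standard three.js WebXRManager calls that iTowns could wrap;
// `setupController` is an illustrative helper name, not an existing API.
import { XRControllerModelFactory } from 'three/addons/webxr/XRControllerModelFactory.js';

const modelFactory = new XRControllerModelFactory();

function setupController(renderer, scene, index) {
    // Target-ray space: the object users point/raycast with.
    const controller = renderer.xr.getController(index);
    scene.add(controller);

    // Grip space: where a visible controller model can be attached.
    const grip = renderer.xr.getControllerGrip(index);
    grip.add(modelFactory.createControllerModel(grip));
    scene.add(grip);

    return controller;
}
```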

Add more WebXR parameters at instance creation
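A minimal sketch of what such parameters could look like, assuming a hypothetical `webXR` option bag passed at view creation (the option names below are assumptions for this proposal, not a merged iTowns API):

```js
// Illustrative only: the `webXR` option bag and its fields are hypothetical
// names for this proposal, not an existing iTowns API.
import * as itowns from 'itowns';

const viewerDiv = document.getElementById('viewerDiv');
const placement = {
    coord: new itowns.Coordinates('EPSG:4326', 2.351, 48.856),
    range: 25000,
};

const view = new itowns.GlobeView(viewerDiv, placement, {
    webXR: {
        controllers: true, // let iTowns create and bind the controller instances
        scale: 1,          // hypothetical world scale applied inside the XR session
    },
});
```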

Listening to controller modifications

The exhaustive list of events that can be listened to from a controller:

For the purpose of this demo, I added a few more events that are fired by the iTowns WebXR logic and listened to by the user example:

TODO: test binding the squeeze event.

Interactive behaviors are not provided by default; the user has to write their own.
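For instance, a minimal sketch of such a user-written binding, assuming the underlying three.js renderer is reachable through `view.mainLoop.gfxEngine.renderer`, and relying on the standard WebXR events that three.js re-emits on its controller objects:

```js
// Sketch of a user-defined interaction. Assumption: the three.js renderer of the
// view created above is reachable through view.mainLoop.gfxEngine.renderer.
const renderer = view.mainLoop.gfxEngine.renderer;

// three.js exposes one Group per input source and re-emits the standard WebXR
// events ('selectstart', 'squeezestart', 'connected', ...) on it.
const controller = renderer.xr.getController(0);

controller.addEventListener('selectstart', () => {
    // e.g. start a teleport ray, pick a feature, open a menu...
    console.log('trigger pressed');
});

controller.addEventListener('squeezestart', () => {
    // squeeze binding still to be tested (see TODO above)
    console.log('grip pressed');
});

controller.addEventListener('connected', (event) => {
    // event.data is the XRInputSource (handedness, gamepad, profiles, ...)
    console.log('controller connected:', event.data.handedness);
});
```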

The example separates two layers of binding:

Potential Problems

  1. Controller position tracking precision, due to the use of the getOffsetReferenceSpace() method to teleport the user in the XR context (a sketch of this pattern follows the list).
    • positions are heavily rounded, which causes a staircase effect when moving the controllers
  2. XR terrain intersection via buffer reading gives a wrong distance result when using the c3DEngine.depthBufferRGBAValueToOrthoZ() method.
    • I assume this is due to the camera array provided for this computation
  3. Controller binding differs from one model to another, and there is no clear way yet to identify in code which model is in use. So should we define basic interactions only for the common API provided? (https://www.w3.org/TR/webxr/#event-types)
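For reference, a sketch of the teleport pattern mentioned in point 1, following the usual three.js approach of offsetting the base XRReferenceSpace (the renderer access path is an assumption):

```js
// Sketch of the teleport pattern behind problem 1: the user is moved by
// offsetting the XR reference space instead of moving the cameras. In a
// geocentric CRS these offsets are millions of metres, hence the visible rounding.
// Assumption: three.js renderer access through view.mainLoop.gfxEngine.renderer.
const xr = view.mainLoop.gfxEngine.renderer.xr;
let baseReferenceSpace;

xr.addEventListener('sessionstart', () => {
    baseReferenceSpace = xr.getReferenceSpace();
});

function teleportTo(position) {
    // XRRigidTransform expects the inverse translation of the target position.
    const offset = new XRRigidTransform(
        { x: -position.x, y: -position.y, z: -position.z, w: 1 },
    );
    xr.setReferenceSpace(baseReferenceSpace.getOffsetReferenceSpace(offset));
}
```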

Potential Solutions

  1. Fact: the precision issue does not occur in a classical scene (near the origin).
    • One track followed was to take advantage of the parent group object containing the VR headset cameras + controllers. This parent would hold the world positioning while the XRReferenceSpace would stay at the 3D origin coordinates. One struggle met is that iTowns doesn't let itself be tricked that easily.
    • Another track was to separate the VR camera array positions from the iTowns camera and synchronize both with the position of the parent group previously set up. The goal was to keep the iTowns camera positions unchanged while benefiting from rendering near the 3D origin. Again, it didn't fit that easily.
    • Move everything but the cameras + VR parent group, i.e. apply the camera's inverse world matrix to the root scene and reset the camera matrices. Again, iTowns is not fit for that purpose.
  2. Under investigation.
  3. Find a way to identify which device is in use? (see the sketch below)
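One possible track for point 3, sketched below: read the standard XRInputSource.profiles array when a controller connects (the profile strings shown are illustrative):

```js
// Sketch: the 'connected' event exposes the XRInputSource, whose `profiles`
// array lists device identifiers from most to least specific (the strings
// matched below are illustrative, e.g. something like 'pico-4' or a generic profile).
const renderer = view.mainLoop.gfxEngine.renderer; // same assumption as above
const controller = renderer.xr.getController(0);

controller.addEventListener('connected', (event) => {
    const profiles = event.data.profiles || [];
    if (profiles.some(p => p.includes('pico'))) {
        // bind Pico-specific interactions here
    } else {
        // fall back to interactions built only on the common WebXR event types
    }
});
```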

Identified use-cases

Known issues

Documentation

Can be tested with Chrome extensions such as: