immersive-web / webxr-hand-input

A feature repo for working on hand input support in WebXR. Feature lead: Manish Goregaokar
https://immersive-web.github.io/webxr-hand-input/

`fillPosePositions()` and `fillPoseOrientations()` for better dual quaternion support #88

Closed Squareys closed 3 years ago

Squareys commented 3 years ago

Hi all!

At the moment, `fillPoses()` (Explainer: https://github.com/immersive-web/webxr-hand-input/blob/main/explainer.md#efficiently-obtaining-hand-poses), added with #37 / #43, can only fill 4x4 matrices. For skinning, dual quaternions are quite popular; Wonderland Engine, for example, runs entirely on dual quaternions.

An implementation using dual quaternions currently has two options: use `fillPoses()` and convert every matrix to a dual quaternion, or avoid `fillPoses()` and use `XRJointPose` / `XRPose`, which yields the rotation quaternion through `XRPose.transform.orientation`.
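For illustration, a rough sketch of the first workaround using gl-matrix; `frame`, `jointSpaces`, and `referenceSpace` are placeholders set up elsewhere, not spec-defined names:

```javascript
// Workaround 1 (sketch): fill 4x4 matrices with fillPoses(), then convert
// each one to a dual quaternion.
const transforms = new Float32Array(jointSpaces.length * 16); // one mat4 per joint
const dualQuats = new Float32Array(jointSpaces.length * 8);   // one quat2 per joint
if (frame.fillPoses(jointSpaces, referenceSpace, transforms)) {
    for (let i = 0; i < jointSpaces.length; ++i) {
        glMatrix.quat2.fromMat4(
            dualQuats.subarray(i * 8, (i + 1) * 8),
            transforms.subarray(i * 16, (i + 1) * 16));
    }
}
```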

While both approaches are viable and work, neither is great.

I would suggest adding `fillPosePositions()` and `fillPoseOrientations()`, which would also be more consistent with what `XRRigidTransform` offers, and maybe renaming `fillPoses()` to `fillPoseMatrices()`.
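Roughly, usage of the suggested methods could look like this (a hypothetical API sketch, not part of the current spec):

```javascript
// Hypothetical usage of the proposed batch getters (names as suggested above).
const positions = new Float32Array(hand.size * 3);    // x, y, z per joint
const orientations = new Float32Array(hand.size * 4); // qx, qy, qz, qw per joint
frame.fillPosePositions(hand.values(), referenceSpace, positions);
frame.fillPoseOrientations(hand.values(), referenceSpace, orientations);
// From here a dual quaternion per joint can be built directly, e.g. with
// glMatrix.quat2.fromRotationTranslation().
```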

Looking forward to your opinions!

Best, Jonathan

cabanier commented 3 years ago

In order to do that, would the underlying implementation have access to these dual quaternions?

Squareys commented 3 years ago

Hi @cabanier!

If by underlying implementation you mean the UA's implementation of the WebXR API, no. It would expose only the position and orientation; creating a dual quaternion from those is close to trivial, and having them separately is useful even outside of dual quaternions.

The UA already has all the information (see `XRPose.transform.orientation` and `XRPose.transform.position`), so a batch getter that saves allocations should be straightforward for UAs to implement.
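To illustrate the "close to trivial" part, a sketch with gl-matrix, assuming `pose` is an `XRJointPose` obtained via `getJointPose()`:

```javascript
// Build a dual quaternion from the position/orientation the UA already exposes.
const { orientation: q, position: p } = pose.transform;
const dq = glMatrix.quat2.fromRotationTranslation(
    glMatrix.quat2.create(),
    [q.x, q.y, q.z, q.w], // rotation quaternion
    [p.x, p.y, p.z]);     // translation
```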

cabanier commented 3 years ago

We provided the matrix because we assumed that most implementations used matrices under the hood and that for others, the conversion from matrix to position/quaternion was simple enough. Do you have data that this conversion is taking too much time?

Squareys commented 3 years ago

I guess not, judging by a very quick-and-dirty benchmark: it measures the time to convert 24*2 joints with both methods, averaged over 100k repetitions.

fromMat4: 0.025 ms
fromRotationTranslation: 0.006 ms

The benchmark source (using glMatrix) is attached below.

Note: this is on an i7-4790K @ 4 GHz. I'm not sure what the results would look like on the Oculus Browser on a Quest 1, for example, but I don't expect the difference to be dramatic.

Benchmark Source Code

```javascript
/** npm i gl-matrix */
const glMatrix = require('gl-matrix');

let start = 0;

/** Get elapsed time since start in ms */
const elapsed_time = function() {
    return process.hrtime(start)[1] / 1000000;
}

const avg = function(list) {
    let sum = 0;
    for(const v of list) {
        sum += v;
    }
    return sum/list.length;
}

const NUM_JOINTS = 24;
const SIZEOF_QUAT2 = 8;
const SIZEOF_MAT4 = 16;
const NUM_POSES = 2*NUM_JOINTS;
const ITERATIONS = 100000;

const out = new Float32Array(NUM_POSES*SIZEOF_QUAT2);
const matrices = new Float32Array(NUM_POSES*SIZEOF_MAT4);
const positions = new Float32Array(NUM_POSES*3);
const orientations = new Float32Array(NUM_POSES*4);

const measurements = new Array(ITERATIONS);

for(let j = 0; j < ITERATIONS; ++j) {
    start = process.hrtime();
    for(let i = 0; i < NUM_POSES; ++i) {
        glMatrix.quat2.fromMat4(out.subarray(i*SIZEOF_QUAT2),
            matrices.subarray(i*SIZEOF_MAT4));
    }
    measurements[j] = elapsed_time();
}
console.log("fromMat4:", avg(measurements).toFixed(3), "ms");

for(let j = 0; j < ITERATIONS; ++j) {
    start = process.hrtime();
    for(let i = 0; i < NUM_POSES; ++i) {
        glMatrix.quat2.fromRotationTranslation(out.subarray(i*SIZEOF_QUAT2),
            orientations.subarray(i*4), positions.subarray(i*3));
    }
    measurements[j] = elapsed_time();
}
console.log("fromRotationTranslation:", avg(measurements).toFixed(3), "ms");
```
cabanier commented 3 years ago

Thanks for taking the time to benchmark this! Please let us know if you find any bottlenecks. Performance is the biggest issue with WebXR, so we're always looking for ways to improve it.

bjornbytes commented 3 years ago

Just want to add another opinion: using matrices for joint transforms does feel awkward given that OpenVR, OpenXR, and VrApi all expose plain position and/or orientation. Matrices take twice as much storage and need a matrix decomposition to get to the relevant info. My framework exposes the data as `[ x, y, z, _, qx, qy, qz, qw ]`, so I'm probably stuck doing some sort of conversion no matter what. Anyway, I'm not blocked by the current API, but I do think the availability of a format other than mat4 would be an improvement.
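For reference, the decomposition step mentioned above, sketched with gl-matrix and a hypothetical `jointMatrix` taken from the `fillPoses()` output:

```javascript
// Decompose a 4x4 joint matrix back into position and orientation.
const position = glMatrix.vec3.create();
const orientation = glMatrix.quat.create();
glMatrix.mat4.getTranslation(position, jointMatrix); // translation column
glMatrix.mat4.getRotation(orientation, jointMatrix); // rotation part (assumes no scale)
```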

cabanier commented 3 years ago

> ...Anyway, I'm not blocked by the current API, but I do think the availability of a format other than mat4 would be an improvement.

That's a reasonable request. Do you have a benchmark that shows this would save a lot of CPU time?