Adding an example showing how to use this API to update the pose of VR input devices in JS:
    var vrControllerModel; // VR controller model to draw in the 3D scene with WebGL.

    for (var i = 0; i < vrDisplay.inputDevices.length; ++i) {
      var inputDevice = vrDisplay.inputDevices[i];
      inputDevice.addEventListener('posemove', (e) => {
        var vrController = e.target;
        if (vrController.pose) {
          var controllerMat = mat4.create(); // Out matrix to receive the pose (gl-matrix assumed).
          getPoseMatrix(controllerMat, vrController.pose);
          vrControllerModel.drawWithMatrix(controllerMat);
        }
      });
    }
It sounds like a good idea to expose the various VR input devices, including the emerging hand tracking and gesture recognition on some VR/MR HMDs.
Adding an example of using gaze and a click gesture to select an object in WebVR:
    var raycaster; // Gaze raycaster driven by head position and orientation.

    for (var i = 0; i < vrDisplay.inputDevices.length; ++i) {
      var inputDevice = vrDisplay.inputDevices[i];
      if (inputDevice instanceof VRHandTrackingDevice) {
        inputDevice.addEventListener('click', (e) => {
          // Select whatever the gaze ray currently intersects, if anything.
          var element = raycaster.intersectedEls[0];
          if (element) {
            element.emit('select');
          }
        });
      }
    }
According to the WebVR Origin Trial results so far (Chrome 57), a fair amount of developer feedback has centered on the difficulty of using the Gamepad API for input. A dedicated VR input API is being considered as a result.
Should we scope VRInputDevice for 2.0? @toji @cvan @NellWaliczek, thoughts?
This is resolved by the new input system for WebXR. Closing.
We briefly chatted about this at the Seattle F2F, so I thought I'd capture the discussion here (from what I remember) so we can build it into an explainer.
Ideally you want to have inputDevices attached to the vrDisplay, e.g.:
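A minimal sketch of what that might look like, assuming a hypothetical inputDevices array on VRDisplay and hypothetical deviceconnect/devicedisconnect events (none of this is settled API):

    navigator.getVRDisplays().then((displays) => {
      var vrDisplay = displays[0];

      // Enumerate whatever input devices are currently connected.
      for (var i = 0; i < vrDisplay.inputDevices.length; ++i) {
        console.log('input device:', vrDisplay.inputDevices[i].id); // `id` is an assumed attribute.
      }

      // React to devices coming and going (event names are assumptions).
      vrDisplay.addEventListener('deviceconnect', (e) => {
        console.log('connected:', e.inputDevice.id);
      });
      vrDisplay.addEventListener('devicedisconnect', (e) => {
        console.log('disconnected:', e.inputDevice.id);
      });
    });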
It'd be nice to have a proper class hierarchy for all the different types of input devices (I'm just going to use IDL quickly to sketch this), e.g.:
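A rough, illustrative shape in WebIDL; the concrete interfaces and members below are assumptions for the sake of discussion, not anything agreed on here:

    interface VRInputDevice : EventTarget {
      readonly attribute DOMString id;
      readonly attribute VRPose? pose;    // null when the device isn't tracked
    };

    interface VRGamepadInputDevice : VRInputDevice {
      readonly attribute Gamepad gamepad; // buttons/axes via the Gamepad API
    };

    interface VRHandTrackingDevice : VRInputDevice {
      // gesture events such as 'click', 'pinch', ...
    };

    interface VRVoiceInputDevice : VRInputDevice {
      readonly attribute DOMString transcript; // speech results; no buttons here
    };

    partial interface VRDisplay {
      readonly attribute FrozenArray<VRInputDevice> inputDevices;
    };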
Each of these would have a specific API appropriate for that device, e.g. the voice device wouldn't have "buttons".
This could work with the Gamepad API, e.g.:
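For instance, a gamepad-backed device could just hand back its Gamepad object; the `gamepad` attribute below is an assumption, but `buttons` and `axes` are standard Gamepad API:

    for (var i = 0; i < vrDisplay.inputDevices.length; ++i) {
      var inputDevice = vrDisplay.inputDevices[i];
      if (inputDevice instanceof VRGamepadInputDevice) {
        var gamepad = inputDevice.gamepad; // A standard Gamepad object (assumed attribute).
        if (gamepad.buttons[0].pressed) {
          console.log('trigger pressed, axes:', gamepad.axes);
        }
      }
    }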
Depending on what is on the input device, this would mean that you could hook up an "action" event which a vrDisplay could use internally, e.g.:
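Something along these lines, where the 'action' event name and its `action` field are assumptions:

    for (var i = 0; i < vrDisplay.inputDevices.length; ++i) {
      vrDisplay.inputDevices[i].addEventListener('action', (e) => {
        // A normalized "primary action", whether it came from a button press,
        // a hand gesture, or a voice command.
        if (e.action === 'primary') {
          selectObjectUnderCursor(); // App-defined handler (hypothetical).
        }
      });
    }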
Developers could also provide their own input devices based on the Web Bluetooth or WebUSB specs, e.g.:
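Roughly like this; MyBluetoothWand and vrDisplay.addInputDevice() are hypothetical, while navigator.bluetooth.requestDevice() and the GATT connect are real Web Bluetooth calls:

    class MyBluetoothWand /* conceptually extends VRInputDevice */ {
      async connect() {
        this.device = await navigator.bluetooth.requestDevice({
          filters: [{ services: ['battery_service'] }] // placeholder filter
        });
        this.server = await this.device.gatt.connect();
        // ...subscribe to characteristics and translate them into poses/button events.
      }
    }

    var wand = new MyBluetoothWand();
    wand.connect().then(() => vrDisplay.addInputDevice(wand)); // addInputDevice() is hypothetical.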
cc/ @esprehn
The proposed API should be written up in an explainer first. E.g.
https://github.com/w3c/ServiceWorker/blob/master/explainer.md
https://github.com/WICG/IntersectionObserver/blob/gh-pages/explainer.md
https://github.com/w3c/css-houdini-drafts/blob/master/css-layout-api/EXPLAINER.md