immersive-web / webxr

Repository for the WebXR Device API Specification.
https://immersive-web.github.io/webxr/

Interoperable WebVR Testing #187

Closed RafaelCintron closed 6 years ago

RafaelCintron commented 7 years ago

From time to time, the working group has expressed the desire to have an interoperable test suite for WebVR.

As we've been building WebVR v1.1, the Edge WebVR team has been writing tests using WebDriver, testharness.js, and the PerceptionSimulation APIs that come with Windows Holographic.

All of our tests run under WebDriver and issue custom WebDriver commands to simulate various operations.

Under the covers, the WebDriver commands are implemented using Windows Holographic PerceptionSimulation, but other implementations can implement them using alternate hooks. On one implementer call, @kearwood talked about writing a custom OpenVR driver to facilitate this in Firefox.
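
The split described above, a stable command surface on top of vendor-specific hooks, could be sketched roughly as follows. Every name below is illustrative, not Edge's actual API:

```javascript
// Hypothetical sketch: a thin dispatch layer that maps abstract
// simulation commands onto whatever backend a browser provides
// (PerceptionSimulation, a custom OpenVR driver, etc.).
// None of these names are real APIs.
class PerceptionSimulationBackend {
  // A vendor would implement these hooks against its own simulator.
  attachDisplay() { return Promise.resolve("attached"); }
  detachDisplay() { return Promise.resolve("detached"); }
}

class VRSimulationDriver {
  constructor(backend) { this.backend = backend; }
  dispatch(command) {
    const hooks = {
      AttachVRDisplay: () => this.backend.attachDisplay(),
      DetachVRDisplay: () => this.backend.detachDisplay(),
    };
    if (!(command in hooks)) {
      return Promise.reject(new Error("Unknown command: " + command));
    }
    return hooks[command]();
  }
}
```

A test would only ever call `dispatch("AttachVRDisplay")`; per browser, only the backend class would differ.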

I think it would be worthwhile to pool our tests together and agree on a common API to simulate the VR implementations underneath. We could share our tests with each other via web-platform-tests.

@kearwood , @toji , what do you think?

To kick off discussion, here is our requestPresent test.

<!DOCTYPE html>
<html>
<head>
    <title>requestPresent</title>
    <meta name="timeout" content="long" />
    <script src="../../resources/testharness.js"></script>
    <script src="../../resources/testharnessreport.js"></script>
    <script src="../../vendor-imports/interop/common/webdriver.js"></script>
    <script src="PerceptionSimulationDriver.js"></script>
    <script src="WebVRHelpers.js"></script>
    <script src="requestPresent.js"></script>
</head>
<body id="body">
    <canvas id="webglCanvas"></canvas>
    <div id="testDiv"></div>
    <script>
        "use strict";
        var vrDisplay;
        var canvas = document.getElementById('webglCanvas');
        var div = document.getElementById('testDiv');

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{}]));
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected with empty frames");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas, leftBounds: [0.0, 0.0] }]));
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected with incorrect bounds (bounds arrays must be 0 or 4 long)");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: div }]));
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected with invalid source (must be canvas element)");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas, leftBounds: [div] }]));
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected with invalid bounds data type (must be able to convert to float)");

        const invalidBounds = [
            [2.0, 0.0, 0.0, 0.0],
            [0.0, 2.0, 0.0, 0.0],
            [0.0, 0.0, 2.0, 0.0],
            [0.0, 0.0, 0.0, 2.0],
            [-1.0, 0.0, 0.0, 0.0],
            [0.0, -1.0, 0.0, 0.0],
            [0.0, 0.0, -1.0, 0.0],
            [0.0, 0.0, 0.0, -1.0]];

        invalidBounds.forEach((bound) => {
            promise_test((test) => {
                return setupVRDisplay(test).then(() => {
                    return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas, leftBounds: bound }]));
                }).then(() => {
                    return validateDisplayNotPresenting(test, vrDisplay);
                });
            }, "WebVR requestPresent rejected with bounds in invalid range: [" + bound + "]");
        });

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                var promise = vrDisplay.requestPresent([{ source: canvas }]);
                return promise_rejects(test, null, promise);
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected without user initiated action");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return promise_rejects(test, null, WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas }, { source: canvas }]));
            }).then(() => {
                return validateDisplayNotPresenting(test, vrDisplay);
            });
        }, "WebVR requestPresent rejected with more frames than max layers");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                function requestAgain() {
                    // Callback for immediate requestPresent call for further testing.
                    // Cache this promise on global object since it seems to be the only object
                    // in scope across calls.
                    window.promiseSecond = vrDisplay.requestPresent([{ source: canvas }]);
                }

                return WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas }], requestAgain);
            }).then(() => {
                // First promise succeeded
                assert_true(vrDisplay.isPresenting, "First promise should successfully fulfill");
                // Now, validate that the subsequent requestPresent was rejected
                return promise_rejects(test, null, window.promiseSecond);
            }).then(() => {
                delete window.promiseSecond;
                assert_true(vrDisplay.isPresenting, "Should still be presenting after rejected second promise");
                return vrDisplay.exitPresent();
            });
        }, "WebVR requestPresent fails while another one is in progress");

        promise_test((test) => {
            return setupVRDisplay(test).then(() => {
                return WebVRHelpers.RequestPresentOnVRDisplay(vrDisplay, [{ source: canvas }]);
            }).then(() => {
                assert_true(vrDisplay.isPresenting, "vrDisplay.isPresenting must be true if requestPresent is fulfilled.");
                assert_equals(vrDisplay.getLayers().length, 1, "vrDisplay.getLayers() should return one layer.");
                return vrDisplay.exitPresent();
            }).then(() => {
                assert_false(vrDisplay.isPresenting, "vrDisplay.isPresenting must be false if exitPresent is fulfilled.");
                // exitPresent() should reject since we are no longer presenting.
                return promise_rejects(test, null, vrDisplay.exitPresent());
            });
        }, "WebVR requestPresent fulfilled");
    </script>
</body>
</html>

Here is requestPresent.js

// requestPresent.js
//
// This file provides helpers for testing VRDisplay requestPresent.

function setupVRDisplay(test) {
    assert_equals(typeof (navigator.getVRDisplays), "function", "'navigator.getVRDisplays()' must be defined.");
    return PerceptionSimulationDriver.AttachWebVRDisplay().then(() => {
        return navigator.getVRDisplays();
    }).then((displays) => {
        assert_equals(displays.length, 1, "displays.length must be one after attach.");
        vrDisplay = displays[0];
        return validateNewVRDisplay(test, vrDisplay);
    });
}

// Validate the settings of a freshly created VRDisplay (prior to calling
// requestPresent).
function validateNewVRDisplay(test, display) {
    assert_true(display.capabilities.canPresent, "display.capabilities.canPresent must always be true for HMDs.");
    assert_equals(display.capabilities.maxLayers, 1, "display.capabilities.maxLayers must always be 1 when display.capabilities.canPresent is true for current spec revision.");
    assert_false(display.isPresenting, "display.isPresenting must be false before calling requestPresent.");
    assert_equals(display.getLayers().length, 0, "display.getLayers() should have no layers if not presenting.");
    var promise = display.exitPresent();
    return promise_rejects(test, null, promise);
}

// Validate the settings of a VRDisplay after the requestPresent promise is
// rejected or after exitPresent is fulfilled.
function validateDisplayNotPresenting(test, display) {
    assert_false(display.isPresenting, "display.isPresenting must be false if requestPresent is rejected or after exitPresent is fulfilled.");
    assert_equals(display.getLayers().length, 0, "display.getLayers() should have no layers if requestPresent is rejected or after exitPresent is fulfilled.");
    var promise = display.exitPresent();
    return promise_rejects(test, null, promise);
}
cvan commented 7 years ago

Just a few notes, if it helps:

A few months back, I was playing with WebDriver, with some success. As far as CI goes, BrowserStack is the only (popular) service I know of with GPUs, such that WebGL can be supported.

Here are some useful notes on known support for testing WebGL (both locally and using CI).

daoshengmu commented 7 years ago

@RafaelCintron, thanks for sharing your idea. Mozilla is working on WebVR conformance tests as well, and I am positive about sharing our test files via web-platform-tests.

To begin, I would like to make sure I understand how you feed your test data into VR devices. IIUC, you use the PerceptionSimulation APIs to send data to Windows Holographic, right? But that would not be unified across other platforms, so I am thinking of defining a mock device API for testing, like Web Bluetooth's proposal.

Currently, Mozilla is building a fake VR device in Gecko's backend; it could be a custom OpenVR driver or a puppet device. We are thinking of using the mock device API I mention above to send test data to the fake VR device. From our experience with the Gamepad API, we made a GamepadServiceTest.idl to help us push test data to the gamepad module in Gecko's backend. That is the approach I want to follow for the VR module, but I am interested in your approach as well.
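
A mock-device layer in the spirit of the Web Bluetooth test proposal might look something like this sketch. The names VRServiceTest and VRSystemPuppet appear in the comment above, but the method shapes here are invented for illustration:

```javascript
// Illustrative only: a JS model of a puppet VR device that a test
// service feeds with data instead of real hardware.
class VRSystemPuppet {
  constructor() {
    this.connected = false;
    this.pose = null;
  }
  connect() { this.connected = true; }
  setPose(pose) {
    if (!this.connected) throw new Error("puppet device not connected");
    this.pose = pose;
  }
}

class VRServiceTest {
  constructor() { this.devices = []; }
  // Attach a new puppet device; in a real browser this would become
  // visible through navigator.getVRDisplays().
  attachDevice() {
    const device = new VRSystemPuppet();
    device.connect();
    this.devices.push(device);
    return device;
  }
}

// A test would drive the puppet instead of real hardware:
const service = new VRServiceTest();
const puppet = service.attachDevice();
puppet.setPose({ position: [0, 1.6, 0], orientation: [0, 0, 0, 1] });
```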

shaoboyan commented 7 years ago

@daoshengmu I noticed that you mentioned that Mozilla is making a fake VR device. Is there an issue where I could track this?

daoshengmu commented 7 years ago

@shaoboyan, you can track our status here: https://bugzilla.mozilla.org/show_bug.cgi?id=1323328.

RafaelCintron commented 7 years ago

Thank you for your input, @daoshengmu. Edge currently exposes additional VR-specific WebDriver commands to facilitate testing: AttachVRDisplay, DetachVRDisplay, SetPositionAndRotation, ResetPositionAndRotation, SetHeadTrackingToOrientationOnly, SetHeadTrackingToPositionAndOrientation, etc. All of our WebVR tests run under WebDriver. If you are not running the browser under WebDriver, the commands do not exist, same as for other WebDriver commands.

On the WebDriver side, we currently implement the commands using Windows Holographic Perception Simulation. But other implementers can change the implementation to use another method; it's up to you. Instead of PerceptionSimulationDriver, as I have in my test snippet above, we can use VRSimulationDriver or another generic name.

For us, the working group, the important thing is to define a clear list of commands. That way, tests submitted to the WebPlatform tests will work in an interoperable fashion regardless of the implementation of the commands.
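
For concreteness, the command names listed above could be pinned down as a factory that wires each agreed command to a vendor-supplied transport. The command names come from the comment above; the createVRSimulator helper itself is hypothetical:

```javascript
// Sketch: build a vendor-neutral simulator object from an agreed
// command list. Only the names come from the discussion above;
// the factory itself is invented for illustration.
const AGREED_COMMANDS = [
  "AttachVRDisplay",
  "DetachVRDisplay",
  "SetPositionAndRotation",
  "ResetPositionAndRotation",
  "SetHeadTrackingToOrientationOnly",
  "SetHeadTrackingToPositionAndOrientation",
];

function createVRSimulator(transport) {
  const sim = {};
  for (const name of AGREED_COMMANDS) {
    // Each command just forwards to the vendor transport
    // (WebDriver in Edge, a local server elsewhere, etc.).
    sim[name] = (...args) => transport(name, args);
  }
  return sim;
}

// Example: a transport that records what the test invoked.
const invoked = [];
const sim = createVRSimulator((name, args) => invoked.push({ name, args }));
sim.AttachVRDisplay();
sim.SetPositionAndRotation([0, 0, 0], 0, [0, 0, 0]);
```

With this shape, web-platform-tests would depend only on the command list, never on the transport behind it.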

WDYT?

ddorwin commented 7 years ago

There was a session on testing device APIs at BlinkOn. It was focused on the Generic Sensor API, Web Bluetooth, and WebUSB, but I think similar issues and ideas probably apply to WebVR.

One issue that was mentioned is that you might end up defining an API/spec as complex as the one you are testing, not to mention having to implement it in each browser.

There was some discussion of using WebUSB to talk to a device (shared by all implementations) that would respond to calls from the API under test. Another approach would be to define an HTTP-based protocol, which would allow more flexibility in how the device is implemented. (For example, a local server could forward commands via USB or to a system driver.)

For WebVR, it seems we could implement a simple HTTP server that communicates with the OpenVR test driver. We'd need to port it to various platforms, but all implementations could share it. (Implementations could also implement their own versions if they don't support OpenVR.)
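
As a sketch of the HTTP idea (endpoint, payload shape, and handler are all invented for illustration), the interesting part is just the wire format; a real bridge would forward the parsed command to the OpenVR test driver:

```javascript
// Hypothetical wire format for an HTTP simulation bridge.
// In a real setup the test would POST this JSON to a local server;
// here the transport is stubbed so only the encoding round-trip
// is shown.
function encodeCommand(name, params) {
  return JSON.stringify({ command: name, params });
}

// Stand-in for the local bridge: parse the request body, (pretend to)
// forward it to the OpenVR test driver, and answer with a status.
function handleBridgeRequest(body) {
  const { command, params } = JSON.parse(body);
  // forwardToOpenVRDriver(command, params);  // not modeled here
  return { status: "ok", command, params };
}

const reply = handleBridgeRequest(
  encodeCommand("SetPositionAndRotation", { position: [0, 1.6, 0] }));
```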

RafaelCintron commented 7 years ago


@ddorwin, I think using a custom web server is one way to accomplish our goals.

In the test code I pasted above, Edge interacts with WebDriver in a unique way compared to standard WebDriver. At the beginning of a test, the WebDriver client sends the ExecuteAsyncScript WebDriver command to the browser. The client then waits for a JSON response. For a simple test, the response is typically "I passed" or "I failed". But the response can also be AttachVRDisplay, DetachVRDisplay, SetPositionAndRotation, etc. As you can see from my test sample, all of this is done using async promises in the test page itself. Once the client runs the command, it sends another ExecuteAsyncScript command to the browser to continue the test. The test ends when the browser sends the final pass/fail response to the WebDriver client.

For us, the important thing to agree on now is the format of the tests and the list of commands that are sent between the test page and the code that talks to the device emulation layer. From there, we can agree on an API that tests can call during test execution. In my sample, you'll see calls to PerceptionSimulationDriver.AttachWebVRDisplay(), but we can call it something more generic like VRSimulator.AttachDisplays(). The implementation of VRSimulator can be (in Edge) the WebDriver business I described, OR (in Chrome) calls to a web server as @ddorwin described, OR (in Firefox) calls to a VR service as @daoshengmu described.
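
The ExecuteAsyncScript round trip described above can be modeled in plain JS (all names here are invented): each call into the page returns either a simulation command or a final verdict, and the client loops until it gets a verdict:

```javascript
// Toy model of the client/page ping-pong. pageStep stands in for one
// ExecuteAsyncScript round trip; clientLoop stands in for the
// WebDriver client. None of this is a real WebDriver API.
function pageStep(state) {
  if (state.pendingCommands.length > 0) {
    // The page asks the client to run a simulation command.
    return { type: "command", name: state.pendingCommands.shift() };
  }
  // No commands left: report the final verdict.
  return { type: "verdict", pass: true };
}

function clientLoop(state) {
  const executed = [];
  for (;;) {
    const response = pageStep(state);   // ExecuteAsyncScript result
    if (response.type === "verdict") {
      return { executed, pass: response.pass };
    }
    executed.push(response.name);       // client runs the command, then
  }                                     // re-enters the page
}

const outcome = clientLoop({
  pendingCommands: ["AttachVRDisplay", "SetPositionAndRotation"],
});
```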

cvan commented 7 years ago

@RafaelCintron is https://github.com/w3c/web-platform-tests/blob/master/webvr/idlharness.html the only completed test harness, or do you have more code to share? I've done quite a bit of work with PhantomJS/SlimerJS. I've deployed Google Chrome Lighthouse on a Debian DigitalOcean droplet, which is quite a nice setup for automated testing. I've been heads-down lately developing various WebVR services (for multi-user, HTTP/2 Server Push, Web App Manifests, Service Workers, WebSockets, WebRTC Data Channels, and so forth). I can't completely commit to anything new just yet, but I might be able to provide some guidance here, and I'm generally curious and eager to improve the tooling for developer and end-consumer workflows for WebVR.

daoshengmu commented 7 years ago

@RafaelCintron

Sorry for making you wait so long. Over the past couple of weeks, I took your test sample and integrated it with our automated test system, and it has landed in our master branch (https://dxr.mozilla.org/mozilla-central/source/dom/vr/test). Thanks! In our test infrastructure, we do some conversion to fit the test into the automated test system.

My current approach is quite simple, but it works for this short-term milestone. We made a VRServiceTest to respond to requests from VRSimulationDriver.js in Firefox, and a VRSystemPuppet to emulate the real device in our VR module backend. In the future, I would like to replace it with an OpenVR test driver that communicates with Gecko. But right now, I think the test cases are more important for us.

Let's discuss the commands that you mentioned:

AttachVRDisplay --- For attaching the existing VR display to the test service. (Agree)

DetachVRDisplay --- For detaching the existing VR display from the test service. (Agree)

SetPositionAndRotation --- It looks similar to SetVRDisplayPose in our VRSimulationDriver.js. I am curious what your parameters are. In my case, I use:

vrDisplay: VRMockDisplay,
position: float3,
linearVelocity: float3,
linearAcceleration: float3,
orientation: float4,
angularVelocity: float3,
angularAcceleration: float3

ResetPositionAndRotation --- What are the values for resetting that you want? position to (0, 0, 0)? orientation to (0, 0, 0, 1)?

SetHeadTrackingToOrientationOnly --- It looks like you would like to set VRDisplayCapabilities to hasOrientation only?

SetHeadTrackingToPositionAndOrientation --- It looks like you would like to set VRDisplayCapabilities to hasPosition and hasOrientation?

More commands I would like to add:

SetEyeResolution --- (vrDisplay: VRMockDisplay, width: double, height: double) for setting up the resolution for each eye.

SetEyeParameter --- (vrDisplay: VRMockDisplay, eye: VREye, offsetX: double, offsetY: double, offsetZ: double, upDegree: double, rightDegree: double, downDegree: double, leftDegree: double) for setting up the resolution for each eye.

UpdateVRDisplay --- (vrDisplay: VRMockDisplay) for setting VREyeParameters and VRFrameData on the VRDisplay at the backend.
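
The SetVRDisplayPose parameter shape above can be written out as a plain object a test would construct; the makePose helper is purely illustrative:

```javascript
// Builds a pose matching the parameter list above (vrDisplay omitted;
// a real call would also pass the VRMockDisplay handle).
// makePose is a hypothetical helper, not part of any test API.
function makePose(overrides = {}) {
  return Object.assign({
    position: [0, 0, 0],            // float3
    linearVelocity: [0, 0, 0],      // float3
    linearAcceleration: [0, 0, 0],  // float3
    orientation: [0, 0, 0, 1],      // float4, identity quaternion
    angularVelocity: [0, 0, 0],     // float3
    angularAcceleration: [0, 0, 0], // float3
  }, overrides);
}

const pose = makePose({ position: [0, 1.6, 0] });
```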

RafaelCintron commented 7 years ago

@daoshengmu, great to see the test integrated into your automation! We have 20+ additional tests that we authored during the course of developing WebVR 1.1.

Here is more detail on the API.

AttachVRDisplay - Synthetically attaches a VR device to the system. Takes no parameters.

DetachVRDisplay - Removes the VR device from the system. Takes no parameters.

SetPositionAndOrientation - Sets the user's position and rotation. Takes three parameters: 1) position: user position, an array of 3 floats representing X, Y, Z; 2) orientation: body orientation, a float value in degrees; 3) headOrientation: head orientation, an array of 3 floats representing roll, pitch, and yaw in degrees.

ResetPositionAndRotation - Resets the user's position and rotation to 0. Takes no parameters. We didn't really use this one in any tests, so we can leave it out for now.

SetHeadTrackingToOrientationOnly - Sets the user tracking to orientation only (3DOF). Takes no parameters. This function simulates tracking loss on HMDs like the HoloLens when you cover the sensors with your hands or switch off the lights in a room. Internally to Edge, we enter "3DOF mode", where the view rotates relative to the most recent position of the head rather than snapping back to the origin.

SetHeadTrackingToPositionAndOrientation - Sets the user's tracking to position and orientation (6DOF). Takes no parameters.
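
Based on the three-parameter shape just described, a call site might validate its arguments like this (the wrapper and the simulator object are both hypothetical):

```javascript
// Illustrative wrapper enforcing the described parameter shapes:
// position = [x, y, z], orientation = body orientation in degrees,
// headOrientation = [roll, pitch, yaw] in degrees.
function setPositionAndOrientation(sim, position, orientation, headOrientation) {
  if (position.length !== 3 || headOrientation.length !== 3) {
    throw new Error("position and headOrientation must each be 3 floats");
  }
  if (typeof orientation !== "number") {
    throw new Error("orientation must be a float value in degrees");
  }
  return sim.SetPositionAndOrientation(position, orientation, headOrientation);
}

// Example with a recording stand-in for the simulator:
const calls = [];
const fakeSim = { SetPositionAndOrientation: (...a) => calls.push(a) };
setPositionAndOrientation(fakeSim, [0, 1.6, 0], 90, [0, 0, 45]);
```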

Unless I am missing something, I do not believe we need an UpdateVRDisplay since that should already be covered by the ones above.

Can you please explain the difference between SetEyeResolution and SetEyeParameter? Both functions are described as setting up the resolution for each eye.

ddorwin commented 7 years ago

The discussion in https://lists.w3.org/Archives/Public/public-test-infra/2017JanMar/0030.html may be relevant.

daoshengmu commented 7 years ago

@RafaelCintron Please ignore UpdateVRDisplay; I think I can do it internally. SetEyeResolution and SetEyeParameter are used to assign the HMD information for each eye. However, I think the resolution of each eye would be the same, so I separated it into another function to avoid duplicated work.

RafaelCintron commented 7 years ago

@SamiraAtMicrosoft has created a pull request in the W3C web-platform-tests repository with an initial set of v1.1 tests. Please take a look and give feedback. More tests will be forthcoming.

offenwanger commented 6 years ago

Opening the discussion for this issue with the WebXR API in https://github.com/immersive-web/proposals/issues/8

toji commented 6 years ago

Further discussion of this subject should happen on the webxr-test-api repo. Closing for general issue cleanup.