Closed RafaelCintron closed 6 years ago
Just a few notes, if it helps:
A few months back, I was playing with WebDriver, with some success. As far as CI goes, BrowserStack is the only (popular) service I know of with GPUs, such that WebGL can be supported.
Here are some useful notes on known support for testing WebGL (both locally and using CI).
@RafaelCintron, thanks for sharing your idea. Mozilla is working on WebVR conformance tests as well, and I would be happy to share our test files with the web-platform-tests.
To begin, I would like to make sure I understand how you feed test data into VR devices. IIUC, you use the PerceptionSimulation APIs to send data to Windows Holographic, right? But that would not be unified across other platforms, so I am thinking of defining a mock device API for testing, similar to Web Bluetooth's proposal.
Currently, Mozilla is building a fake VR device in Gecko's backend; it could be a custom OpenVR driver or a puppet device. We are thinking of using the mock device API mentioned above to send test data to the fake VR device. From our experience with the Gamepad API, we made a GamepadServiceTest.idl to push test data to the gamepad module in Gecko's backend. That is the approach I want to follow for the VR module, but I am interested in your approach as well.
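To make the shape of such a mock device API concrete, here is a minimal sketch of a GamepadServiceTest-style interface for VR. Every name here (MockVRService, attachDisplay, setPose) is a hypothetical illustration, not taken from any actual IDL:

```javascript
// Hypothetical sketch of a GamepadServiceTest-style mock API for VR.
// It illustrates the "push test data into a fake device" pattern
// described above; none of these names come from a real spec.
class MockVRService {
  constructor() {
    this.displays = new Map();
    this.nextId = 1;
  }
  // Create a fake display that the content process could enumerate.
  attachDisplay(name) {
    const id = this.nextId++;
    this.displays.set(id, { name, pose: null, connected: true });
    return id;
  }
  detachDisplay(id) {
    this.displays.delete(id);
  }
  // Push a pose frame into the fake device, as a test would.
  setPose(id, pose) {
    const d = this.displays.get(id);
    if (!d) throw new Error("no such display: " + id);
    d.pose = pose;
  }
  getPose(id) {
    const d = this.displays.get(id);
    return d ? d.pose : null;
  }
}
```

A test would attach a display, push poses into it, and then assert that the WebVR API surface reports the expected values.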
@daoshengmu I noticed that you mentioned that Mozilla is making a fake VR device. Is there an issue where I could track this?
@shaoboyan, You can track our status here, https://bugzilla.mozilla.org/show_bug.cgi?id=1323328.
Thank you for your input, @daoshengmu . Edge currently exposes additional VR specific WebDriver commands to facilitate testing: AttachVRDisplay, DetachVRDisplay, SetPositionAndRotation, ResetPositionAndRotation, SetHeadTrackingToOrientationOnly, SetHeadTrackingToPositionAndOrientation, etc. All of our WebVR tests run in WebDriver. If you are not running the browser under WebDriver, the commands do not exist, same as other WebDriver commands.
On the WebDriver side, we currently implement the commands using Windows Holographic Perception Simulation. But we (you) can change the implementation to use another method; it's up to you. Instead of PerceptionSimulationDriver, as I have in my test snippet above, we can use VRSimulationDriver, or another more generic name.
For us, the working group, the important thing is to define a clear list of commands. That way, tests submitted to the WebPlatform tests will work in an interoperable fashion regardless of the implementation of the commands.
WDYT?
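To illustrate what "a clear list of commands" could look like in practice, here is a sketch of a shared command table plus a validator that any backend (WebDriver, web server, VR service) could sit behind. The command names are the ones Edge exposes above; the parameter shapes are my assumptions for illustration:

```javascript
// Sketch of a shared command list the group could agree on.
// Command names come from the Edge WebDriver commands described above;
// the required-parameter lists are illustrative assumptions.
const VR_COMMANDS = {
  AttachVRDisplay: [],
  DetachVRDisplay: [],
  SetPositionAndRotation: ["position", "orientation"],
  ResetPositionAndRotation: [],
  SetHeadTrackingToOrientationOnly: [],
  SetHeadTrackingToPositionAndOrientation: [],
};

// Validate a { name, params } command object before handing it to
// whatever implementation-specific backend executes it.
function validateCommand(cmd) {
  const expected = VR_COMMANDS[cmd.name];
  if (!expected) return { ok: false, error: "unknown command: " + cmd.name };
  for (const key of expected) {
    if (!(key in (cmd.params || {}))) {
      return { ok: false, error: "missing param: " + key };
    }
  }
  return { ok: true };
}
```

Agreeing on the table (names and parameters) is the interoperable part; how each browser executes a validated command stays implementation-defined.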
There was a session on testing device APIs at BlinkOn. It was focused on the Generic Sensor API, Web Bluetooth, and WebUSB, but I think similar issues and ideas probably apply to WebVR.
One issue that was mentioned is that you might end up defining an equally complex API/spec as the one you are testing, not to mention having to implement it in each browser.
There was some discussion of using WebUSB to talk to a device (shared by all implementations) that would respond to calls from the API under test. Another approach would be to define an HTTP-based protocol, which would allow more flexibility in how the device is implemented. (For example, a local server could forward commands via USB or to a system driver.)
For WebVR, it seems we could implement a simple HTTP server that communicates with the OpenVR test driver. We'd need to port it to various platforms, but all implementations could share it. (Implementations could also implement their own versions if they don't support OpenVR.)
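As a rough sketch of the HTTP-based approach, the server side could be little more than a command router in front of the test driver. The endpoint names and payloads below are assumptions for illustration; a real version would forward these calls to e.g. an OpenVR test driver instead of mutating in-memory state:

```javascript
// Minimal sketch of an HTTP-style command protocol for a shared VR test
// device. Endpoints and payloads are illustrative assumptions.
const state = { attached: false, pose: null };

// Route one request. Returning { status, body } keeps this testable
// without opening a socket; mounting it under Node's http.createServer
// would be a thin wrapper around this function.
function handleCommand(path, body) {
  switch (path) {
    case "/attach":
      state.attached = true;
      return { status: 200, body: { attached: true } };
    case "/detach":
      state.attached = false;
      return { status: 200, body: { attached: false } };
    case "/pose":
      if (!state.attached) return { status: 409, body: { error: "no display attached" } };
      state.pose = body;
      return { status: 200, body: { pose: state.pose } };
    default:
      return { status: 404, body: { error: "unknown command" } };
  }
}
```

Because the protocol is just HTTP plus JSON, the same server could be backed by OpenVR on one platform and a system driver on another, which is the flexibility argument above.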
@ddorwin, I think using a custom web server is one way to accomplish our goals.
In the test code I pasted above, Edge interacts with WebDriver in a unique way compared to standard WebDriver. At the beginning of a test, the WebDriver client sends the ExecuteAsyncScript WebDriver command to the browser and waits for a JSON response. For a simple test, the response is typically "I passed" or "I failed". But the response can also be AttachVRDisplay, DetachVRDisplay, SetPositionAndRotation, etc. As you can see from my test sample, all of this is done using async promises in the test page itself. Once the client runs the command, it sends another ExecuteAsyncScript command to the browser to continue the test. The test ends when the browser sends the final pass/fail response to the WebDriver client.
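The ping-pong protocol described above can be sketched as a simple client loop: each round trip either yields a device command for the client to execute or a final verdict. The fake response list and helper names below are illustrative, not Edge's actual implementation:

```javascript
// Sketch of the ExecuteAsyncScript ping-pong described above.
// Each entry in browserResponses stands for one round trip: the browser
// replies with either a device command (AttachVRDisplay, ...) or a
// final verdict ("pass"/"fail"). All names are illustrative.
function runTestLoop(browserResponses, runCommand) {
  const executed = [];
  for (const response of browserResponses) {
    if (response === "pass" || response === "fail") {
      return { verdict: response, executed };
    }
    // Anything else is a device command the client must execute before
    // sending the next ExecuteAsyncScript to continue the test.
    runCommand(response);
    executed.push(response);
  }
  return { verdict: "timeout", executed };
}
```

The key property is that the test page drives the sequence: the WebDriver client is a dumb executor that keeps pumping until the page reports pass or fail.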
For us, the important thing to agree on now is the format of the tests and the list of commands sent between the test page and the code that talks to the device emulation layer. From there, we can agree on an API that tests can call during test execution. In my sample, you'll see calls to PerceptionSimulationDriver.AttachWebVRDisplays(), but we can call it something more generic like VRSimulator.AttachDisplays(). The implementation of VRSimulator can be (in Edge) the WebDriver mechanism I described, OR (in Chrome) calls to a web server as @ddorwin described, OR (in Firefox) calls to a VR service as @daoshengmu described.
@RafaelCintron is https://github.com/w3c/web-platform-tests/blob/master/webvr/idlharness.html the only completed test harness, or do you have more code to share? I've done quite a bit of work with PhantomJS/SlimerJS. I've deployed Google Chrome Lighthouse on a Debian DigitalOcean droplet, which is a quite nice setup for automated testing. I've been heads down lately developing various WebVR services (for multi-user, HTTP/2 Server Push, Web App Manifests, Service Workers, WebSockets, WebRTC Data Channels, and so forth). I can't completely commit to anything new just yet, but I might be able to provide some guidance here, and I'm generally curious and eager to improve the tooling for developer and end-consumer workflows for WebVR.
@RafaelCintron Sorry for making you wait so long. Over the past couple of weeks, I took your test sample, integrated it with our automated test system, and it has landed in our master branch (https://dxr.mozilla.org/mozilla-central/source/dom/vr/test). Thanks! In our test infrastructure, we do some conversion to fit the test into the automated test system.
My current approach is quite simple, but it works for this short-term milestone. We made a VRServiceTest to respond to requests from VRSimulationDriver.js in Firefox, and a VRSystemPuppet to emulate the real device in our VR module backend. In the future, I would like to replace it with an OpenVR test driver that communicates with Gecko. But right now, I think the test cases are more important for us.
Let's discuss the commands that you mentioned:
AttachVRDisplay --- For attaching a VR display to the test service. (Agree)
DetachVRDisplay --- For detaching the existing VR display from the test service. (Agree)
SetPositionAndRotation --- It looks similar to SetVRDisplayPose in our VRSimulationDriver.js. I am curious what your parameters are. In my case, I use:
vrDisplay: VRMockDisplay,
position: float3,
linearVelocity: float3,
linearAcceleration: float3,
orientation: float4,
angularVelocity: float3,
angularAcceleration: float3
ResetPositionAndRotation --- What values do you want for the reset? Position to (0,0,0)? Orientation to (0,0,0,1)?
SetHeadTrackingToOrientationOnly --- It looks like you would like to set VRDisplayCapabilities to hasOrientation only?
SetHeadTrackingToPositionAndOrientation --- It looks like you would like to set VRDisplayCapabilities to hasPosition and hasOrientation?
More commands I would like to add:
SetEyeResolution --- (vrDisplay: VRMockDisplay, width: double, height: double) for setting up the resolution for each eye.
SetEyeParameter --- (vrDisplay: VRMockDisplay, eye: VREye, offsetX: double, offsetY: double, offsetZ: double, upDegree: double, rightDegree: double, downDegree: double, leftDegree: double) for setting up the resolution for each eye.
UpdateVRDisplay --- (vrDisplay: VRMockDisplay) for setting VREyeParameters and VRFrameData on the VRDisplay at the backend.
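A small sketch of the SetVRDisplayPose payload described in the list above, with plausible defaults. The makePose helper name and the default values are my assumptions; the field names are the ones listed:

```javascript
// Sketch of the pose payload listed above for SetVRDisplayPose.
// Field names follow the list in the comment; the helper and its
// defaults are illustrative assumptions.
function makePose(overrides = {}) {
  return Object.assign({
    position: [0, 0, 0],             // float3
    linearVelocity: [0, 0, 0],       // float3
    linearAcceleration: [0, 0, 0],   // float3
    orientation: [0, 0, 0, 1],       // float4: identity quaternion (x, y, z, w)
    angularVelocity: [0, 0, 0],      // float3
    angularAcceleration: [0, 0, 0],  // float3
  }, overrides);
}
```

Under these defaults, a ResetPositionAndRotation could plausibly be just makePose() with no overrides: position (0,0,0) and orientation (0,0,0,1), which is the question raised above.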
@daoshengmu. Great to see the test get integrated into your automation! We have 20+ additional tests we authored during the course of developing WebVR 1.1.
Here is more detail on the API:
AttachVRDisplay - Synthetically attaches a VR device to the system. Takes no parameters.
DetachVRDisplay - Removes the VR device from the system. Takes no parameters.
SetPositionAndOrientation - Sets the user's position and rotation. Takes three parameters: 1) position: user position, an array of 3 floats representing X, Y, Z; 2) orientation: body orientation, a float value in degrees; 3) headOrientation: head orientation, an array of 3 floats representing roll, pitch, and yaw in degrees.
ResetPositionAndRotation - Resets the user's position and rotation to 0. Takes no parameters. We didn't really use this one in any tests, so we can leave it out for now.
SetHeadTrackingToOrientationOnly - Sets the user's tracking to orientation only (3DOF). Takes no parameters. This function simulates tracking loss on HMDs like the HoloLens when you cover the sensors with your hands or switch off the lights in a room. Internally in Edge, we enter a "3DOF mode" where the view rotates relative to the most recent position of the head rather than snapping back to the origin.
SetHeadTrackingToPositionAndOrientation - Sets the user's tracking to position and orientation (6DOF). Takes no parameters.
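Since SetPositionAndOrientation takes head orientation as roll/pitch/yaw in degrees while WebVR's VRPose exposes a quaternion, a shared test layer would likely need a conversion along these lines. The intrinsic ZYX rotation order is an assumption; the group would have to pin down the exact convention:

```javascript
// Sketch of a roll/pitch/yaw (degrees) to quaternion conversion, as a
// test layer might need to compare SetPositionAndOrientation input
// against VRPose.orientation. Intrinsic ZYX order is an assumption.
function eulerDegreesToQuaternion(rollDeg, pitchDeg, yawDeg) {
  const d2r = Math.PI / 180;
  const r = rollDeg * d2r / 2, p = pitchDeg * d2r / 2, y = yawDeg * d2r / 2;
  const cr = Math.cos(r), sr = Math.sin(r);
  const cp = Math.cos(p), sp = Math.sin(p);
  const cy = Math.cos(y), sy = Math.sin(y);
  return {
    x: sr * cp * cy - cr * sp * sy,
    y: cr * sp * cy + sr * cp * sy,
    z: cr * cp * sy - sr * sp * cy,
    w: cr * cp * cy + sr * sp * sy,
  };
}
```

Pinning down details like this (degrees vs. radians, rotation order, handedness) is exactly the kind of thing the shared command list would need to specify for tests to be interoperable.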
Unless I am missing something, I do not believe we need an UpdateVRDisplay, since that should already be covered by the ones above.
Can you please explain the difference between SetEyeResolution and SetEyeParameter? Both functions have "setting up the resolution for each eye" as their description.
The discussion in https://lists.w3.org/Archives/Public/public-test-infra/2017JanMar/0030.html may be relevant.
@RafaelCintron Please ignore UpdateVRDisplay; I think I can do it internally. SetEyeResolution and SetEyeParameter are used to assign the HMD information for each eye. However, I think the resolution of both eyes would be the same, so I separated it into another function to avoid duplicate work.
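For reference, the SetEyeParameter payload above maps fairly directly onto WebVR's VREyeParameters (the per-eye offset) and VRFieldOfView (the per-edge degrees). A small sketch of a builder for that payload; the helper name and the default IPD/FOV values are illustrative assumptions:

```javascript
// Sketch of the SetEyeParameter payload, mapping onto WebVR's
// VREyeParameters offset and VRFieldOfView degrees. The builder and
// its default values are illustrative assumptions.
function makeEyeParameter(eye, overrides = {}) {
  // Assume a ~63mm IPD, i.e. +/-0.0315m offset on the X axis per eye.
  const sign = eye === "left" ? -1 : 1;
  return Object.assign({
    eye,                                       // VREye: "left" or "right"
    offsetX: sign * 0.0315, offsetY: 0, offsetZ: 0,
    upDegree: 45, rightDegree: 45, downDegree: 45, leftDegree: 45,
  }, overrides);
}
```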
@SamiraAtMicrosoft has created a pull request in the W3C web-platform-test repository with an initial set of v1.1 tests. Please take a look and give feedback. More tests will be forthcoming.
Opening the discussion for this issue with the WebXR API in https://github.com/immersive-web/proposals/issues/8
Further discussion of this subject should happen on the webxr-test-api repo. Closing for general issue cleanup.
From time to time, the working group has expressed the desire to have an interoperable test suite for WebVR.
As we've been building WebVR v1.1, the Edge WebVR team has been writing tests using WebDriver, testharness.js, and the PerceptionSimulation APIs that come with Windows Holographic.
All of our tests run in WebDriver and issue custom WebDriver commands to simulate various operations:
Under the covers, the WebDriver commands are implemented using Windows Holographic PerceptionSimulation. But other implementations can implement them using alternate hooks. In one implementer call, @kearwood talked about writing a custom OpenVR driver to facilitate this in Firefox.
I think it would be worthwhile to pool our tests together and agree on a common API to simulate the VR implementations underneath. We can share our tests with each other via the web-platform-tests repository.
@kearwood , @toji , what do you think?
To kick off discussion, here is our requestPresent test.
Here is requestPresent.js