KhronosGroup / WebGL

The Official Khronos WebGL Repository

Device Info APIs #2548

Open n8schloss opened 6 years ago

n8schloss commented 6 years ago

Hey all,

During W3C TPAC, one item the Web Performance Working Group discussed was adding an API (https://github.com/w3c/device-memory/issues/15) to get the 'WEBGL_debug_renderer_info' info without having to spin up a WebGL context. Many users with lower-end GPUs can't really handle the experiences we want to ship, so we need this info. Unfortunately, the users with lower-end GPUs are the same ones for whom spinning up a WebGL context can be extra expensive; in some cases it can make the screen appear to freeze for whole seconds. To address this, we want to make an API that exposes this info without having to create a WebGL context.
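For context, today the only way to read these strings is to create a context first, which is exactly the expensive step we want to avoid. A minimal sketch (the extension may not be exposed in every browser, so the helper returns null in that case):

```javascript
// Query the unmasked vendor/renderer strings from an existing WebGL context.
// Returns null when the WEBGL_debug_renderer_info extension is unavailable.
function getUnmaskedRendererInfo(gl) {
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  if (!ext) return null;
  return {
    vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
    renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
  };
}

// In a browser this is driven by a real context -- the costly part:
//   const gl = document.createElement('canvas').getContext('webgl');
//   const info = gl && getUnmaskedRendererInfo(gl);
```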

There's no new info being exposed here, so fingerprinting isn't really a big concern. This mainly avoids paying the large cost of setting up a WebGL context for the people who really shouldn't be using one. Also, even without 'WEBGL_debug_renderer_info' being exposed, the performance characteristics of the GPU will always be observable, so all we do by withholding this info is force the people on lower-end devices to pay a large cost to render things.
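To illustrate the "always observable" point: even without renderer strings, a page can sample frame durations (e.g. deltas between requestAnimationFrame timestamps) and bucket the device from those. The helper and the ~33 ms (30 fps) cutoff below are illustrative assumptions, not anything standardized:

```javascript
// Hypothetical sketch: classify a device from sampled frame times in ms.
// In a browser, frameTimesMs would come from requestAnimationFrame deltas;
// the 33 ms threshold (~30 fps) is an assumed cutoff for illustration.
function classifyDevice(frameTimesMs) {
  const sorted = [...frameTimesMs].sort((a, b) => a - b);
  const median = sorted[Math.floor(sorted.length / 2)];
  return median > 33 ? 'low-end' : 'capable';
}
```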

One of the follow-ups from TPAC was to see if anyone here has any big concerns before we move forward. As I said, this information is already available today and we need it (and do get it), but obtaining it can be a very bad user experience on some devices.

kenrussell commented 6 years ago

I'm concerned about adding yet another querying mechanism to the browser that will be essentially incompatible with both WebGL and the forthcoming GPUWeb API – it probably won't return correct information that can be used in conjunction with either.

I doubt very much that anyone's validly using WEBGL_debug_renderer_info to get information about the amount of GPU memory available. The WebGL working group asked the OpenGL and OpenGL ES working groups a long time ago to standardize on a way to report the amount of memory available to the GPU, but there are too many differences among GPU types about which kind of memory can be used for what purpose, whether the system uses a unified memory architecture, etc. to provide a simple answer to that question.

We probably could have used OS-specific APIs but decided to recommend other heuristics to developers, which have worked well – like looking at the browser's window size and assuming that you have a certain number of bytes per pixel available.

The strings that come back from WEBGL_debug_renderer_info can only be obtained by creating an OpenGL or OpenGL ES context, aren't standardized in any way, and deliberately don't contain information about GPU memory. I suspect that most pages querying WEBGL_debug_renderer_info are doing so unnecessarily. If you can show a counterexample, please do. (Google Maps required this information in order to detect and blacklist poorly performing GPUs, and couldn't ship without it – this was the reason Chrome shipped the extension universally.)
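The window-size heuristic described above might be sketched as follows; the 4-bytes-per-pixel figure and the headroom multiplier are illustrative assumptions, not standardized values:

```javascript
// Rough memory-budget heuristic based on the visible window size.
// bytesPerPixel and headroom are assumed constants for illustration only.
function estimateMemoryBudgetBytes(cssWidth, cssHeight, devicePixelRatio,
                                   bytesPerPixel = 4, headroom = 8) {
  const devicePixels =
    (cssWidth * devicePixelRatio) * (cssHeight * devicePixelRatio);
  return devicePixels * bytesPerPixel * headroom;
}

// In a browser:
//   estimateMemoryBudgetBytes(window.innerWidth, window.innerHeight,
//                             window.devicePixelRatio);
```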

If you think you can provide a useful piece of information to web developers then please prototype your solution on Windows, Linux, macOS and Android (and, ideally, iOS too) and share it here before setting any web API in stone.

Also, the only situation I've seen where creating a WebGL context is slow is on dual-GPU MacBooks where it causes a switch from the integrated to the discrete GPU, and we're working on optimizing it. If you find other situations where it's unexpectedly slow then please file bugs against browsers (http://crbug.com/ , https://bugzilla.mozilla.org , https://bugs.webkit.org/ ) and include enough detailed GPU information that the problem can be reproduced. Also please share the bug IDs with folks somehow (me, @zhenyao or @kainino0x for Chrome; @jdashg for Firefox; @grorg for Safari) so they're triaged quickly. Thanks.