Closed fernandojsg closed 4 years ago
Some progress...
As of https://github.com/mrdoob/three.js/commit/07b089637dab162e634238a8ea820daf65e06d91, this is what the API looks like:
renderer.vr.enabled = true;
renderer.animate( update ); // this does the requestAnimationFrame() for the user
WEBVR.getVRDisplay( function ( device ) {
renderer.vr.setDevice( device );
document.body.appendChild( WEBVR.getButton( device, renderer.domElement ) );
} );
http://rawgit.com/mrdoob/three.js/dev/examples/webvr_daydream.html
Bye bye VRControls and VREffect 👋
@mrdoob Cool, great stuff ❤️ .
I think we should gradually adjust the other VR examples and deprecate VRControls and VREffect.
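For reference, the pattern those two helpers use in the current examples is roughly this (a sketch from memory, details vary per example):
// Old pattern (VRControls + VREffect), replaced by renderer.vr.enabled and renderer.animate():
controls = new THREE.VRControls( camera );
effect = new THREE.VREffect( renderer );
effect.setSize( window.innerWidth, window.innerHeight );
function animate() {
requestAnimationFrame( animate );
controls.update(); // copies the headset pose onto the camera
effect.render( scene, camera ); // renders both eyes and submits the frame
}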
@mrdoob I like it! I'm just not sure about the name of getCamera(), as it's not really returning a camera but updating the camera matrices with the frame data. Maybe something like updateCamera()?
@fernandojsg Yeah, wasn't sure about the name. That method is supposed to be used internally, although it is publicly accessible.
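A hypothetical before/after of the rename being discussed (neither name is final):
// current: builds/updates an ArrayCamera from the VRDisplay frame data
var vrCamera = renderer.vr.getCamera( camera );
// proposed: same behaviour, but the name says it updates matrices rather than returning something new
renderer.vr.updateCamera( camera ); // hypothetical name, not an existing API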
Piling on because there is some relevance.
A problem I've observed with the current WebGLRenderer is that the render is monolithic: it does a lot of work in one opaque function. As a proof of concept I've looked at splitting up the calls (these could still be wrapped in a single render) to allow more client-side configuration. @mrdoob I'm interested in your thoughts.
In a full PR I would imagine that the context of the render would be captured and would flow through each call, and temporary state would be removed from the renderer.
At an API-call level this is what I'm looking at; this is working:
<html>
<body>
<script src="three.min.js"></script>
<script>
var scene, visibiltyCamera, leftCamera, rightCamera, renderer;
var geometry, material, mesh, mesh2;
init();
animate();
function prepareCamera(camera) {
if ( camera.parent === null ) camera.updateMatrixWorld();
camera.matrixWorldInverse.getInverse( camera.matrixWorld );
}
function init() {
scene = new THREE.Scene();
visibiltyCamera = new THREE.PerspectiveCamera( 85, window.innerWidth / window.innerHeight, 1, 10000 );
visibiltyCamera.position.z = 1000;
leftCamera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, 10000 );
leftCamera.position.x = -300;
leftCamera.position.z = 1000;
rightCamera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, 10000 );
rightCamera.position.x = 300;
rightCamera.position.z = 1000;
geometry = new THREE.BoxGeometry( 200, 200, 200 );
material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );
mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
mesh2 = new THREE.Mesh( geometry, material );
mesh2.position.x = 1000;
scene.add( mesh2 );
renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
}
function mySortStable( a, b ) {
return a.id - b.id;
}
function myReverseSortStable( a, b ) {
return b.id - a.id;
}
function animate() {
requestAnimationFrame( animate );
mesh.rotation.x += 0.01;
mesh.rotation.y += 0.02;
prepareCamera(visibiltyCamera);
prepareCamera(leftCamera);
prepareCamera(rightCamera);
scene.updateMatrixWorld();
renderer.autoClear = false;
// premise is to optionally replace the renderer call with functions split by stages
// renderer.render( scene, rightCamera );
renderer.renderReset();
// renderSetup is covered by prepareCamera and is therefore optional
// I'd propose removing this
//renderer.renderSetup( scene, visibiltyCamera );
// allows much more control of the renderlist and places sort configuration in the hands of developers
renderer.renderGenerateRenderList(
scene,
visibiltyCamera,
undefined,
undefined,
{opaqueSort: mySortStable, transparentSort: myReverseSortStable} );
// visibiltyCamera is only required for layer visibility tests and therefore the dependency can be reduced
renderer.renderShadows(scene, visibiltyCamera);
// set viewport and render for each eye
// this could encompass array cameras for multiview
renderer.setViewport(0, 0, 500, 500);
renderer.renderDisplayCamera( scene, leftCamera );
renderer.setViewport(500, 0, 500, 500);
renderer.renderDisplayCamera( scene, rightCamera );
// postamble required for render-to-texture
renderer.renderFinishRenderTarget();
}
</script>
</body>
</html>
I'm interested in new web technologies that improve performance. Let me share some micro-benchmarks I've been making lately.
WebAssembly: https://takahirox.github.io/WebAssembly-benchmark/
WebWorkers: https://takahirox.github.io/WebWorkers-benchmark/
SIMD.js: https://takahirox.github.io/SIMDJS-benchmark/
I'd be glad if they'd help you understand their performance characteristics.
My impressions of them are:
@mikearmstrong001 I agree with you that we must simplify and modularize the render itself. Probably we could start by following an approach similar to the one discussed in https://github.com/mrdoob/three.js/issues/11475 for materials, so we could end up with a dictionary of render steps that we could easily replace with our own code.
@takahirox Context switching between WASM and JS is still expensive, so I agree that we won't get any benefit by just moving small functions to WASM; as you said, it must be something CPU-intensive to get the real benefits. Maybe a file format parser is something that could benefit from both WASM and web workers. As far as I know, SIMD.js is going to be deprecated anytime soon, and it will become part of the WASM spec :/
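A rough sketch of that batching principle; 'transforms.wasm' and its exports are illustrative assumptions, not an existing module:
WebAssembly.instantiateStreaming( fetch( 'transforms.wasm' ) ).then( function ( result ) {
var wasm = result.instance.exports;
// anti-pattern: one JS<->WASM boundary crossing per vector rarely beats plain JS
for ( var i = 0; i < positions.length; i += 3 ) {
wasm.transformVec3( positions[ i ], positions[ i + 1 ], positions[ i + 2 ] );
}
// better: copy the whole batch into wasm linear memory and cross the boundary once
var heap = new Float32Array( wasm.memory.buffer, wasm.getBufferPtr(), positions.length );
heap.set( positions ); // positions is assumed to be a Float32Array of xyz triples
wasm.transformVec3Batch( positions.length / 3 );
} );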
@mikaelgramont @takahirox As these two are quite extensive topics to discuss, what about moving the discussions to a new issue and linking them here?
@mikearmstrong001 Sorry for the delay.
That API looks good to me. I've been slowly modularising render() and some of the methods you propose are already there; it's just a matter of exposing them.
How about:
renderer.setOpaqueSort( mySortStable );
renderer.setTransparentSort( myReverseSortStable );
renderer.renderReset();
renderer.renderBackground( scene, camera );
renderer.renderGenerateShadows( scene, camera );
renderer.renderGenerateRenderList( scene, camera );
renderer.renderDisplayCamera( scene, camera );
renderer.renderFinishRenderTarget();
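Put together with the two-viewport example above, usage could look something like this (same proposed names, still subject to change):
renderer.setOpaqueSort( mySortStable );
renderer.setTransparentSort( myReverseSortStable );
renderer.renderReset();
renderer.renderBackground( scene, visibiltyCamera );
renderer.renderGenerateShadows( scene, visibiltyCamera );
renderer.renderGenerateRenderList( scene, visibiltyCamera );
renderer.setViewport( 0, 0, 500, 500 );
renderer.renderDisplayCamera( scene, leftCamera );
renderer.setViewport( 500, 0, 500, 500 );
renderer.renderDisplayCamera( scene, rightCamera );
renderer.renderFinishRenderTarget();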
@mrdoob no worries, thank you for discussing!
The API changes look good.
I would be of the general opinion that temporary state should go into a context object which is passed from function to function; however, this is not in the current three.js makeup and is by no means a deal breaker. (My reasoning for a context object is that it clearly connects the context of separate function calls.)
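A rough sketch of that context-object idea (createRenderContext and the ctx parameter are hypothetical, not an existing three.js API):
var ctx = renderer.createRenderContext( scene, visibiltyCamera ); // holds the render list, sort config and temporary state
renderer.renderGenerateRenderList( ctx );
renderer.renderGenerateShadows( ctx );
renderer.setViewport( 0, 0, 500, 500 );
renderer.renderDisplayCamera( ctx, leftCamera );
renderer.setViewport( 500, 0, 500, 500 );
renderer.renderDisplayCamera( ctx, rightCamera );
renderer.renderFinishRenderTarget( ctx );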
Philosophically, I believe exposing more of the internal functions makes sense, as does providing a means to externally control behaviour (e.g. sorting). It would be super useful if developers using three.js could replace any aspect of three.js in part without having to also replace unrelated functionality. This goes for shaders, the render loop, etc.
"context switching between WASM and JS is still expensive"
@fernandojsg Is this still the case? Some of your colleagues and other volunteers have made very significant advances in wasm-bindgen and https://github.com/rustwasm
I think we can now close this issue.
Three.js is currently the most popular choice for developing WebVR. We, at Mozilla, use it as the base for our https://github.com/aframevr/aframe framework. But when it comes to performance in WebVR, three.js still has plenty of room for improvement.
I've been collecting a list of features that would be great to have implemented in three.js in order to deliver a more performant VR experience. I wanted to have a place to discuss all of them as an overall vision of the engine modifications, and later on we could create an issue for each item and keep the discussion there.
I know three.js is not focused just on WebVR, so I understand some of the proposals won't be part of the main render path (say, foveated rendering) but rather modules that the user could enable, or that are enabled automatically when a WebVR project is detected. Still, most of the proposals will help the main render path even when not using WebVR.
[x] Scene render should accept an array of cameras (https://github.com/mrdoob/three.js/issues/10927). A usage sketch follows after this list.
[x] Common frustum for both eyes to be used by the ArrayCamera for faster frustum culling, based on the diagram by Cass Everitt. Already proposed to be part of the WebVR API (being discussed on the WebVR spec: https://github.com/w3c/webvr/issues/203). A culling sketch follows after this list.
[x] Reduce draw calls when rendering in stereo by using instancing to double the geometry with a single draw call. We'll need dynamic clipping planes in the shader or, ideally and hopefully, an extension like the proposed https://www.khronos.org/registry/webgl/extensions/proposals/WEBGL_multiview available in WebGL.
[ ] Foveated rendering: The simplest approach is to render sharp where the user is looking and low-res/blurry in the periphery, but there are plenty of ways to improve it, for example perceptually based (contrast-preserving) rendering as described in https://research.nvidia.com/publication/perceptually-based-foveated-virtual-reality. There's an ongoing discussion (https://github.com/w3c/webvr/issues/205) on how to implement some of the lens matching/multires shading features directly in the browser within the WebVR API, so it could help improve the overall performance together with the changes in the engine.
[ ] Avoid using deferred rendering and go for a fully optimized forward renderer, as MSAA >= 4x should be a must for VR. To deal with one of the biggest problems of forward vs deferred, the number of lights, we could go for a clustered forward renderer as used in many AAA engines nowadays with amazing results.
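For the array-of-cameras item, a minimal sketch of how the ArrayCamera path is used in current three.js releases (sub-camera viewports are in pixels; positions are placeholders):
var halfWidth = window.innerWidth / 2;
var leftEye = new THREE.PerspectiveCamera( 75, halfWidth / window.innerHeight, 1, 10000 );
leftEye.viewport = new THREE.Vector4( 0, 0, halfWidth, window.innerHeight );
leftEye.position.set( -30, 0, 1000 );
leftEye.updateMatrixWorld();
var rightEye = new THREE.PerspectiveCamera( 75, halfWidth / window.innerHeight, 1, 10000 );
rightEye.viewport = new THREE.Vector4( halfWidth, 0, halfWidth, window.innerHeight );
rightEye.position.set( 30, 0, 1000 );
rightEye.updateMatrixWorld();
var arrayCamera = new THREE.ArrayCamera( [ leftEye, rightEye ] );
renderer.render( scene, arrayCamera ); // one render call covers both eyes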
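For the common-frustum item: this is not the single combined frustum from Cass Everitt's diagram, but a simpler stand-in with the same culling result is to test against the union of both eye frusta (an object is culled only if it lies outside both):
var frustumL = new THREE.Frustum();
var frustumR = new THREE.Frustum();
var projViewL = new THREE.Matrix4();
var projViewR = new THREE.Matrix4();
function visibleToEitherEye( object, leftEye, rightEye ) {
projViewL.multiplyMatrices( leftEye.projectionMatrix, leftEye.matrixWorldInverse );
projViewR.multiplyMatrices( rightEye.projectionMatrix, rightEye.matrixWorldInverse );
frustumL.setFromProjectionMatrix( projViewL ); // setFromMatrix() in older three.js releases
frustumR.setFromProjectionMatrix( projViewR );
return frustumL.intersectsObject( object ) || frustumR.intersectsObject( object );
}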
General optimizations/ideas not related directly to the engine itself but that could help improve the overall quality and performance of VR experiences (using the premise that if WebVR is available, WebGL2 will be available too):
[ ] Moving asset processing out of the main thread (https://github.com/mrdoob/three.js/issues/11746). In WebVR, every time you don't submit frames to the headset fast enough you're kicked out to the Vive/Oculus lobby, so if you're loading assets at runtime while the experience is already presenting in VR, you'll annoy the user by jumping in and out of the experience whenever frames drop. It's also important to note that with link traversal this problem matters at load time too: when you enter a new website from a previously presenting WebVR page, the browser will wait only a short period of time for you to send the first frame, and if you don't do it fast enough the browser will stop presenting and return to 2D mode, requiring user interaction again. So a good approach could be to move asset parsing off the main thread with the help of web workers and service workers and start presenting as soon as possible while the assets load in the background. A worker sketch follows at the end of this list.
[ ] Use compressed textures: WebGL2 supports many compressed texture formats that are available everywhere; we should take advantage of them to reduce bandwidth and memory consumption. A loader sketch follows at the end of this list.
[ ] Optimize expensive functions with SIMD/WebAssembly (math library or hot paths): At first I thought about going SIMD with all the matrix functions using the new WebAssembly API, but it just doesn't work as we expected from the C/ASM times, where there was no context switching and you got an almost free performance boost just by converting your functions to MMX/SSE/... Now, with WebAssembly, because of the context switching, type conversion and so on, we need to be sure that we're jumping to an optimized WebAssembly function that does enough work at once to gain a real advantage. So it's better suited to things like a 4-step Catmull-Clark subdivision of a huge mesh than to optimizing a vector multiplication, even if it's used hundreds of times per frame.
[ ] "Automatic" LOD: To improve framerate in big scenes LOD is desirable, but generating LOD models from existing one is a tedious task, it will consume more bandwith and it will add another step to handle the assets sets. So an "automatic" (configurable) LOD could help a lot here.
[ ] Lightmap generator: Probably off-topic here, but it would be great to have a way to improve the overall aesthetics of the scene, as we can't currently use things like SSAO, and this is a cheap and simple way to make up for it.
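A minimal sketch of the off-main-thread parsing idea; the worker file name, message shape and parsePositions() are illustrative assumptions:
// main thread
var worker = new Worker( 'parseWorker.js' );
worker.onmessage = function ( event ) {
var geometry = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( event.data.positions, 3 ) ); // setAttribute() in newer releases
scene.add( new THREE.Mesh( geometry, material ) ); // material is assumed to exist in the app
};
fetch( 'model.bin' ).then( function ( response ) { return response.arrayBuffer(); } ).then( function ( buffer ) {
worker.postMessage( buffer, [ buffer ] ); // transfer ownership instead of copying
} );
// parseWorker.js
onmessage = function ( event ) {
var positions = parsePositions( event.data ); // app-specific parser returning a Float32Array
postMessage( { positions: positions }, [ positions.buffer ] ); // transfer back; the main thread only uploads
};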
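And for compressed textures, loading a KTX container with the existing examples loader looks roughly like this ('diffuse_BC1.ktx' is a placeholder asset):
// <script src="examples/js/loaders/KTXLoader.js"></script>
var ktxLoader = new THREE.KTXLoader();
var compressedMap = ktxLoader.load( 'textures/diffuse_BC1.ktx' ); // returns a THREE.CompressedTexture
var texturedMaterial = new THREE.MeshBasicMaterial( { map: compressedMap } );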