In our current voxel implementation, frame rates can be low (~20 FPS) when rendering a large voxel dataset in a full-screen view.
We know that the fragment shader is the bottleneck, because:
- A DevTools profile shows less than 8 ms of JavaScript time per frame, so the CPU is not the limiting factor
- Shrinking the pixel area of the view improves the FPS, which points to per-fragment cost
Within the fragment shader, a few candidate bottlenecks include:
- Intersections are sorted with a bubble sort, which is O(n²) in the intersection count
- Octree traversal introduces a lot of divergent branching
- Property texture lookups go through a 2D "megatexture" that simulates 3D textures
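To make the megatexture concern concrete, here is a minimal sketch of one possible layout (hypothetical, not Cesium's actual code): each z-slice of the volume is stored as a tile in a 2D atlas, so stepping along x moves one texel, while stepping along z jumps to an entirely different tile.

```javascript
// Hypothetical megatexture layout: each z-slice of a dimX × dimY × dimZ
// volume is stored as a tile in a 2D atlas, with `tilesPerRow` tiles per row.
// Stepping in x moves 1 texel; stepping in z jumps to a different tile,
// which is why sampling along the "slow" axis can thrash the texture cache.
function voxelToMegatexture(x, y, z, dimX, dimY, tilesPerRow) {
  const tileCol = z % tilesPerRow;
  const tileRow = Math.floor(z / tilesPerRow);
  return {
    u: tileCol * dimX + x,
    v: tileRow * dimY + y,
  };
}
```

With this layout, two voxels adjacent in x are adjacent texels, but two voxels adjacent in z are `dimX` texels (or a whole atlas row) apart, which is exactly the direction-dependence the slice-animation test above would expose.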
Testing solutions for any of the above is not trivial, so we should start with some simple tests to confirm which one is the real bottleneck. Here are some preliminary ideas to test:
- [ ] Intersections: compare performance with a simple shape vs. one with render bounds and clipping planes (a simple shape has `INTERSECTION_COUNT == 1`, hence no sorting)
- [ ] Octree traversal: compare performance against a tileset with only 1 LOD
- [ ] 2D "megatexture": load a 3D array into a 2D megatexture, then animate a slice moving along a coordinate axis. Do we see a significant frame rate difference when sampling along the "fast" vs. "slow" directions of the array? Are WebGL2 3D textures any less direction-dependent?
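For reference, the sorting cost can be illustrated with a self-contained sketch (plain JavaScript standing in for the shader code): bubble-sorting N ray/shape intersections by entry distance costs O(N²) comparisons per fragment, per frame, while a single intersection needs none.

```javascript
// Illustrative sketch of per-fragment intersection sorting (not shader code).
// Each intersection is an interval along the ray with entry distance `t`;
// intervals must be ordered front to back before the ray can be marched.
// Bubble sort costs O(n^2) comparisons, paid once per fragment, per frame.
function sortIntersections(intersections) {
  const n = intersections.length;
  for (let i = 0; i < n - 1; i++) {
    for (let j = 0; j < n - 1 - i; j++) {
      if (intersections[j].t > intersections[j + 1].t) {
        const tmp = intersections[j];
        intersections[j] = intersections[j + 1];
        intersections[j + 1] = tmp;
      }
    }
  }
  return intersections;
}
```

Note that with a single intersection the loop body never executes, which is what makes the simple-shape comparison in the first test a clean way to isolate the sorting cost.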
One short-term fix could be to add a `resolutionScale` property to `VoxelPrimitive` that renders the voxels at a reduced resolution, cutting the cost on mid- to lower-end devices without affecting the rest of the scene.
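A sketch of what such a property could do (the name `resolutionScale` and the wiring here are assumptions, mirroring the scene-wide `resolutionScale` that Cesium already exposes on `Viewer`): render the voxel pass into a framebuffer whose dimensions are scaled down, then upsample when compositing.

```javascript
// Sketch: derive the voxel pass framebuffer size from a resolutionScale
// in (0, 1]. The rest of the scene still renders at full resolution;
// only the voxel pass pays fewer fragment shader invocations.
function voxelPassDimensions(drawingBufferWidth, drawingBufferHeight, resolutionScale) {
  // Clamp to a sane range so a bad value can't produce a zero-sized target.
  const scale = Math.min(Math.max(resolutionScale, 0.1), 1.0);
  return {
    width: Math.max(1, Math.round(drawingBufferWidth * scale)),
    height: Math.max(1, Math.round(drawingBufferHeight * scale)),
  };
}
```

At a scale of 0.5 the voxel pass shades roughly 4× fewer fragments, which should be a large win if the fragment shader really is the bottleneck.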