Totally on board with this! We just need someone who would like to give it a go. @spite was playing with this a bit some weeks ago; maybe he has some code/API suggestions.
Some extra reading: http://the-witness.net/news/2013/09/shadow-mapping-summary-part-1/
My tests with shadow mapping are behind three.js's current implementation. I was just exploring how the different algorithms work (mostly following the classic implementations and http://codeflow.org/entries/2013/feb/15/soft-shadow-mapping/). I'm supporting bias and a few different filtering methods.
Surface acne, peter-panning, shadow maps clipping outside the view frustum... all of these problems are more or less solved, but even AAA games struggle to achieve convincing shadows. Most have to be heavily tweaked by hand (CryEngine, for instance, relies a lot on art direction), use cascaded shadow mapping, summed-area variance maps, some kind of filtering, and mix dynamic shadow mapping with static lightmaps. Unreal Engine recently switched to ray-traced shadows using signed distance fields.
It would be nice to have shadows that can be switched on and work with minimum tweaking, but I'm not entirely sure we're right there yet.
Hi all, I was going to post something about shadows but @Usnul beat me to it! By the way @Usnul, back in 2011 three.js did in fact have stencil shadows as an option, but for some reason they moved away from that method and went with shadow maps. I've looked at most of the articles and papers you have all mentioned and weighed the pros and cons of the various techniques.
Three.js shadow maps are OK now, but performance on my 2014 smartphone is still pretty slow, and there are the rendering artifacts already mentioned. @spite mentioned Unity's new method, and I was going to post this video link so you can see what they are talking about. Check out minutes 5:00 to 20:00 when you have 15 minutes to spare (this is the future of shadows, I think): https://www.youtube.com/watch?v=DQt_OopZadI&list=UUBobmJyzsJ6Ll7UbfhI4iwQ
I have always been impressed with iq's distance field work over on Shadertoy and on mrdoob's glslsandbox.com. In this article he talks about soft shadows for free: http://www.iquilezles.org/www/articles/rmshadows/rmshadows.htm
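For reference, the core of iq's trick fits in a few lines. A minimal GLSL sketch (my paraphrase of the article; map() is an assumed scene distance function):

```glsl
float map(vec3 p);  // assumed: returns the distance from p to the nearest surface

// March from the surface point toward the light; the closer the ray passes
// to an occluder (h) relative to the distance travelled (t), the darker it gets.
float softShadow(vec3 ro, vec3 rd, float mint, float maxt, float k) {
    float res = 1.0;
    float t = mint;
    for (int i = 0; i < 64; i++) {       // fixed bound for WebGL1 loops
        if (t >= maxt) break;
        float h = map(ro + rd * t);
        if (h < 0.001) return 0.0;       // hit an occluder: full shadow
        res = min(res, k * h / t);       // penumbra estimate
        t += h;                          // sphere-trace step
    }
    return res;                          // 0 = fully shadowed, 1 = fully lit
}
```

The k parameter controls penumbra width - lower values give softer shadows.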
Since ray-marched distance fields run fast, even on my smartphone, I was wondering if this could somehow be used for shadow and lighting effects inside three.js's scene graph:
http://glslsandbox.com/e#20832.0
Somehow the Unity devs figured it out for their scenegraph in the video above - can we do it too?
I really believe that more ray-traced effects, and eventually all parts of the rendering process, will slowly but surely move to GPU ray tracing as target platforms gain more graphics horsepower. It would be great if three.js could use iq's distance fields for at least real-time soft shadows and AO, and maybe even offer an SDFRenderer as an alternative to WebGLRenderer in the more distant future? :)
@erichlof I think you mean Unreal, not Unity. Signed distance fields are a pretty convincing shadowing method. The problem is storage: you need to compute the fields (preferably on the GPU) and you need a way to access them quickly from shaders. Unless I'm mistaken, we only have textures and arrays (as uniforms) for that. Without 3D textures you have to pack voxels into texels. Also, without a spatial index (like a voxel octree) you'd waste a lot of space encoding empty/filled areas. And a spatial index, being a tree, doesn't lend itself to linear memory allocation, so it's not easy to pack into a texture (not impossible, but unpleasant).
It could be interesting to do volume approximation and use those cheap, precise functions like the ones iq presents. However, once you have triangles, every poly needs a test in the worst case - meaning if you have 9 polys in an object, you may need to do 9 tests per sample (see the sketch below). This doesn't scale, so the need for an encoding comes in (a field of some nature capable of representing 3D space).
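To make the scaling problem concrete, a brute-force scene SDF looks like this in GLSL (toy objects of my own, purely illustrative):

```glsl
// Brute force: every object's distance function is evaluated for every sample.
float map(vec3 p) {
    float d = p.y;                                       // ground plane
    d = min(d, length(p - vec3(0.0, 1.0, 0.0)) - 1.0);   // sphere
    d = min(d, length(max(abs(p - vec3(3.0, 0.5, 0.0)) - vec3(0.5), 0.0))); // box
    // ...one min() per object, and a single shadow ray takes dozens of map()
    // samples, so the cost per ray grows linearly with object count.
    return d;
}
```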
To sum up, the way I see it we have the following problems:
- computing the distance field, preferably on the GPU;
- storing it and accessing it quickly from shaders, given that WebGL has no 3D textures;
- packing a spatial index such as an octree into the linear memory of a texture;
- per-poly analytic tests that don't scale with geometry complexity.
I agree that it would be pretty amazing to have this.
A few more relevant articles:
@Usnul ah yes.. I did mean Unreal engine rather than Unity (they both begin with the letters un - ha).
You brought up some great points about the challenges of representing the 3D world in a signed distance field. Unfortunately my experience is limited to creating hobby 3D games using tools like three.js (you can check out 2 games I've made so far on my GitHub homepage). I know how to extract the functionality and algorithms I need to finish a simple game, but admittedly I don't know much about how the scene graph is currently represented in WebGLRenderer and how it all works under the hood.
However, I do know that SDF functions like the ones iq has made popular work very well across my devices, even my cell phone. So I don't know how to tackle all of the issues you raised, but like you said, it would be great to have this in three.js.
One last note: the Unreal devs said they decreased the resolution of the SDF so it would run faster. I have noticed on glslsandbox.com that when I choose 2 instead of 1 for the resolution while running an SDF demo, it still looks great but runs much better - something to think about if this method ever finds its way into three.
Thanks so much for the articles - I hadn't seen those before. I have a lot of reading to do :-)
The distance field shadow maps look really nice. Plus you can do ambient occlusion and possibly global illumination (voxel cone tracing?) with the same data. But if I understood the Unreal developers right (http://youtu.be/DQt_OopZadI?t=11m10s), their method (currently?) only works on rigid objects.
Wouldn't it be very time consuming to recompute the signed distance field from scratch each frame? For static meshes you could precompute the distance field for each mesh and then composite the fields in real time as you move meshes around; same for morphed meshes, by blending precomputed distance fields (one per morph target). But for skinned meshes or physically deformed soft bodies you'd have to recompute the distance field from scratch, and that GPU Gems algorithm already takes >60 sec for a 10k triangle mesh.
The glslsandbox demo (http://glslsandbox.com/e#20832.0) has animated objects, but it uses analytical functions for the objects' distance fields. Finding an analytical expression for the (approximate) distance field of a flying dragon flapping its wings could be a hard problem.
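To make the compositing idea concrete, here is a hedged sketch (all helper names are made up; each would sample a mesh's precomputed field, e.g. from a texture, after transforming the point into that mesh's local space):

```glsl
float sampleFieldA(vec3 p);  // assumed: precomputed field of static mesh A
float sampleFieldB(vec3 p);  // assumed: field of a rigid mesh moved this frame
float sampleMorph0(vec3 p);  // assumed: field of morph target 0
float sampleMorph1(vec3 p);  // assumed: field of morph target 1
uniform float morphWeight;

float sceneDist(vec3 p) {
    float d = min(sampleFieldA(p), sampleFieldB(p));  // union of rigid meshes
    // Morphing: blend the targets' fields. This only approximates the true
    // distance field of the morphed surface, but it is cheap.
    d = min(d, mix(sampleMorph0(p), sampleMorph1(p), morphWeight));
    return d;
}
```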
Maybe there are really fast algorithms for computing the distance field. The voxel cone tracing global illumination algorithm (NVIDIA paper) produces an octree with the voxelized scene (up to 512^3 voxels) in real time. However, the algorithm looks complex and uses advanced shader features possibly not available in WebGL. NVIDIA also uses this approach for their VXGI technology (https://developer.nvidia.com/gi-works). All demos use a rather low voxel resolution, which is OK for (indirect, low-frequency) global illumination, but certainly not high enough for (direct, high-frequency) shadows.
Other facts about the Unreal implementation of the distance field shadows (see here and here):
As long as there is no paper or blog post that describes in detail how to implement distance field shadows for general scenes (including how to handle animated objects, how to prevent artifacts, and how to implement everything on the GPU), this is a research area and isn't likely to replace shadow maps in three.js any time soon.
@crobi That's a lot of info to digest! The GigaVoxels research (the one you referenced) uses 2 kinds of representation: a regular grid (bricks of voxels), and a sparse octree indexing those bricks.
With regards to dynamic meshes: using the stencil buffer we could draw only the dynamic meshes into regular shadow maps to save fragment computation time, and in a typical game scene this should cull a lot of pixels from those shadow maps.
Regarding dynamic meshes in general: the beauty of a hierarchical representation such as an octree is that you can store the encapsulating parent and recompute only it, allowing the rest of the hierarchy to remain unchanged.
Regarding representation: 2D textures can be used to represent 3D textures - not as efficiently, but it's possible (see the sketch below). If there were enough interest and momentum behind this, we could think about encoding octrees in 2D textures as well. Computing the field without geometry shaders is a challenge in my view, though it can be done on the CPU. At the very least, if we could have excellent shadows that require async pre-computation and only work on static meshes, I'd say that's a huge win - the majority of scenery is static in the most popular applications.
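For the plain-grid case the packing is just index arithmetic. A WebGL1-style sketch, assuming a size^3 volume whose Z-slices are tiled into a square 2D atlas (names and layout are my assumptions):

```glsl
uniform sampler2D volumeAtlas;
uniform float size;          // voxels per side, e.g. 64.0
uniform float slicesPerRow;  // e.g. 8.0 (an 8x8 grid of tiles for size 64)

float sampleVolume(vec3 uvw) {             // uvw in [0, 1]^3
    float slice = floor(uvw.z * size);     // which Z-slice
    vec2 tile = vec2(mod(slice, slicesPerRow), floor(slice / slicesPerRow));
    vec2 uv = (tile + uvw.xy) / slicesPerRow;  // assumes slicesPerRow^2 == size
    return texture2D(volumeAtlas, uv).r;   // distance stored in the red channel
}
```

This reads only the nearest Z-slice; real code would fetch two slices and blend between them for trilinear-style filtering.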
@Usnul In general, most of these problems could be solved somehow. But having worked in computer graphics research myself, I know there is a huge difference between an idea and a production-ready implementation. Most shadow map algorithms have been presented in high-quality papers that discuss all the limitations and implementation details. Without that information, you would have to recruit someone to spend a lot of time on research, implementation, and maintenance of the code.
> regular grid
Wouldn't that use too much memory? A 512^3 grid of byte values uses 128 MB (512^3 = 134,217,728 voxels at one byte each). And I'm not sure that resolution is high enough for nice-looking shadows. A 1024^3 grid already uses 1 GB, which is certainly too much.
> The beauty of a hierarchical representation such as an octree is that you can store the encapsulating parent and recompute only it, allowing the rest of the hierarchy to remain unchanged.
Sure, but even recomputing the distance field for a single animated 10k triangle mesh is far too expensive using regular distance field algorithms. There may be faster algorithms for computing the distance field on the GPU; I just don't know of any.
> 2D textures can be used to represent 3D textures
This is indeed a very simple change. More complex to emulate are atomic operations and random-access reads/writes. As far as I know, these are used by the voxel cone tracing algorithm. Rewriting algorithms to avoid such non-WebGL features requires more thought.
> we could think about encoding octrees in 2D textures as well
Reading octrees encoded in 2D textures should be easy, and generating them offline on the CPU as well. I just have my doubts about generating sparse octree textures in WebGL on the GPU in real time. For reference, here is an implementation of a full kd-tree encoded in a 2D texture - generated on the CPU, but used by the GPU for rendering.
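The reading side really is just index arithmetic. A sketch of a node fetch, assuming a hypothetical layout of one RGBA texel per node, stored in row-major order:

```glsl
uniform sampler2D nodeTexture;
uniform vec2 nodeTexSize;   // texture dimensions in texels

// Turn a linear node index into a texel coordinate and fetch the node.
vec4 fetchNode(float nodeIndex) {
    float x = mod(nodeIndex, nodeTexSize.x);
    float y = floor(nodeIndex / nodeTexSize.x);
    return texture2D(nodeTexture, (vec2(x, y) + 0.5) / nodeTexSize);
}
// A node could, say, pack a child index in .r and leaf data in .gba;
// traversal is then a bounded loop of fetchNode() calls.
```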
> The majority of scenery is static in the most popular applications
For applications with only rigid objects, I can see how distance fields could be a pure win. For applications with non-rigid objects, one could mix distance field shadows with regular shadow map shadows (as you described), but I'm not sure that would look strictly better than (cascaded) shadow maps, so it should be a user option at least.
> Wouldn't that use too much memory? A 512^3 grid of byte values uses 128 MB.
512^3 is actually the exact number used in the voxel cone tracing paper: https://research.nvidia.com/sites/default/files/publications/GIVoxels-pg2011-authors.pdf - the results they show are very convincing.
> Sure, but even recomputing the distance field for a single animated 10k triangle mesh is far too expensive using regular distance field algorithms.
I grant you that, and I agree that creating the field itself is probably a hard limitation. At the same time, one could use a mix of shadow maps for dynamic objects and distance field maps for static ones.
> This is indeed a very simple change. More complex to emulate are atomic operations and random-access reads/writes.
Not necessarily; construction does require write access, but it doesn't have to be atomic, since access is bounded to one voxel. During use there is no write access, so consistency is a given.
> Reading octrees encoded in 2D textures should be easy
Well, there you go. I'm lacking in knowledge when it comes to shader programming, so thanks for clearing that up. The demo serves as a great proof of concept.
> ...but I'm not sure that would look strictly better than (cascaded) shadow maps, so it should be a user option at least.
Sure, I agree. The Unreal guys said that's what they do - use normal CSM for dynamic meshes - hence my suggestion. The reasoning behind this is as follows:
Given a shadow mapping algorithm that produces soft shadows with occluder distance taken into account (i.e. softer further away from the occluder), blending the SDF shadow with the shadow map would produce a better result for both: the dynamic objects contribute shadow into the SDF-shadowed areas, and the shadow map similarly benefits from at least the low-frequency shadows generated by SDF shadowing (see the sketch below). Another nice thing is that you wouldn't have to generate the whole shadow map, since you could use the stencil buffer to mark static geometry.
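Something like this is what I have in mind (a sketch of my own, not necessarily how Unreal combines the two; both helpers are hypothetical):

```glsl
float shadowMapTerm(vec3 p);  // assumed: CSM/PCF lookup covering dynamic meshes
float sdfShadowTerm(vec3 p);  // assumed: ray-marched soft shadow from the static SDF

// Treat both techniques as occlusion terms in [0, 1] and keep the darker one,
// so dynamic occluders darken SDF-lit areas and vice versa.
float combinedShadow(vec3 p) {
    return min(shadowMapTerm(p), sdfShadowTerm(p));  // 0 = full shadow, 1 = lit
}
```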
Another point: dynamic meshes could still receive shadows based on the SDF, so they would essentially get GI influence from static meshes for free.
> 512^3 is actually the exact number used in the voxel cone tracing paper
Yes, but they use it for global illumination. You can compute global illumination at a very low resolution because errors in indirect lighting do not stand out. Shadows, on the other hand, need to be computed at a high resolution - you can make out roughly the same feature size in shadows as in the original geometry. If you have a fence where you can see individual posts, you need to see them in the shadow as well. I'm pretty sure a 512^3 voxel representation of the scene is unusable for shadows.
> Not necessarily; construction does require write access, but it doesn't have to be atomic
Actually, the algorithm as presented in the paper does require synchronization between threads for the construction of the octree (section 4.2.1). If you used a full octree, you could create a static layout of the octree nodes in the 2D texture - that's what the posted kd-tree demo does - but a full octree uses too much memory. So if you use a dynamic layout of the octree nodes, where you store only those nodes that are actually needed, you will need some kind of synchronization between threads as they request memory, plus random-access writes, because you don't know where each octree node will end up in the texture.
You may be able to implement a sparse octree without atomics/random-access writes by somehow precomputing where each octree cell will end up in memory before writing to it, but I've never seen that done. One idea would be to use a prefix sum for stream compaction (section 39.3.1).
@crobi If we used multiple web workers, I agree, memory access would be an issue. However, if we create a JS object first and then serialize it in a single thread, this problem goes away, as far as concurrent access is concerned.
> Yes, but they use it for global illumination. You can compute global illumination at a very low resolution because errors in indirect lighting do not stand out.
Yes and no. Basic stencil shadows do a very good job of providing high-frequency detail, but nothing in between the boolean values. Even a low-res density field with bleeding would produce underestimated soft shadows, which would add low-frequency detail - think of interiors being darker and large creases being accented softly. I'm not advocating low resolution as an ideal solution, but rather as a foot in the door, to get people interested and to show that this can be done.
> If we used multiple web workers
Yes, for static objects. For precomputed distance fields it doesn't really matter how you compute them. I was talking about real-time updated octrees - doing any heavy CPU computation that has to be repeated every frame is a no-go for a web engine, in my opinion.
> Yes and no. Basic stencil shadows do a very good job of providing high-frequency detail
Hm, not sure what you are talking about. What I meant is that you can get away with computing global illumination (GI) at a very low resolution. Computing dynamic GI at a high resolution is usually a waste, because no one can tell the difference in a moving scene (like in a game). It could be different if you are rendering an architecture preview, but for high-quality GI you precompute the lighting anyway.
Shadows, on the other hand, need a sufficiently high resolution, because otherwise the shadows of thin, detailed objects will look like blobs and you can easily tell that something is wrong. Of course, hard shadows look wrong as well, but that is a different problem. Everything is an approximation in computer graphics, but you usually invest more of your limited resources in shadows than in global illumination, because the brain is better at spotting errors in shadows.
@crobi Agreed on the first point: avoid heavy computation. The second point is just my opinion; you are right that a larger resolution is better.
Shadow maps in three are pretty good, but in terms of the technology behind them they are stuck around 1978 (shadow mapping: L. Williams, "Casting Curved Shadows on Curved Surfaces", Computer Graphics 12, 3 (August 1978), 270-274), plus percentage closer filtering (Reeves, Salesin and Cook, 1987).
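For context, the filtering part boils down to something like this (a generic 3x3 PCF tap of my own, not three's exact code):

```glsl
// Compare the receiver's depth against several nearby shadow map texels and
// average the binary lit/shadowed results to soften the edge.
float pcf3x3(sampler2D shadowMap, vec3 shadowCoord, vec2 texelSize, float bias) {
    float lit = 0.0;
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            vec2 offset = vec2(float(x), float(y)) * texelSize;
            float occluder = texture2D(shadowMap, shadowCoord.xy + offset).r; // assumes depth in red channel
            lit += step(shadowCoord.z - bias, occluder);  // 1.0 if lit at this tap
        }
    }
    return lit / 9.0;  // fraction of lit taps -> soft edge
}
```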
Shadow maps as they are have many problems:
- surface acne (false self-shadowing from depth precision and bias issues);
- peter-panning (shadows detaching from their casters when the bias is too large);
- perspective aliasing (blocky shadow edges close to the camera);
- shadows clipping when casters fall outside the shadow map's frustum.
MS has a good article on this; I'm sure many of you have seen it: http://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx
Many of these issues can be avoided through the use of stencil shadow volumes, but those require geometry shaders to be effective, which we don't have yet in WebGL. Stencil shadows also have their own set of problems; for one, they depend a great deal on scene geometry complexity, adding further per-polygon overhead to the renderer (complex scenes take even longer to render, in proportion to their complexity).
Since we are somewhat stuck with shadow maps, there are a few things shadow maps have picked up over the years to reduce the aforementioned artifacts:
- depth bias and slope-scaled bias to fight acne (at the risk of peter-panning);
- filtering such as PCF to soften shadow edges;
- variance shadow maps (VSM), which store depth moments and can be prefiltered;
- cascaded shadow maps (CSM) to fight perspective aliasing.
In fact, this paper on variance soft shadow mapping addresses most of these issues: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/VSSM_PG2010.pdf
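The core of the variance approach is small. A minimal sketch of the VSM lookup (the generic version, not the full VSSM algorithm from the paper):

```glsl
// The shadow map stores depth and depth^2 (the first two moments);
// Chebyshev's inequality then bounds the probability that the receiver is lit.
float vsmShadow(sampler2D momentMap, vec2 uv, float receiverDepth) {
    vec2 moments = texture2D(momentMap, uv).rg;    // (E[d], E[d^2])
    if (receiverDepth <= moments.x) return 1.0;    // in front of the mean occluder
    float variance = max(moments.y - moments.x * moments.x, 0.00002); // clamp to fight acne
    float d = receiverDepth - moments.x;
    return variance / (variance + d * d);          // Chebyshev upper bound on "lit"
}
```

Because the map stores moments rather than raw depths, it can be blurred, mipmapped, or summed-area filtered like any other texture, which is where the soft shadows come from.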
For a decent overview of recent shadow mapping methods, here are 2 good presentations: http://developer.download.nvidia.com/presentations/2008/GDC/GDC08_SoftShadowMapping.pdf http://advancedgraphics.marries.nl/presentationslides/18_variance_soft_shadow_mapping.pdf
Based on what I've seen/read, I believe a variant of VSM is needed. To deal with perspective aliasing, we need robust support for CSM (cascaded shadow maps). In r69 (the current release at the time of writing) CSMs are broken in three, and even if they were not, there are many restrictions on them in three:
The reasons I believe these to be important:
For a game engine the shadows three has to offer are lacking, and I don't believe it has to be so. If we had a robust shadow mapping implementation with few parameters and CSM on by default, everyone who uses shadow maps would benefit automatically, and new users would find it pleasing to get amazing shadows with little-to-no tweaking out of the box.
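As a rough picture of the shader side of CSM, cascade selection can be as simple as this sketch (uniform names are made up; the split distances would be computed on the CPU, typically as a mix of uniform and logarithmic splits):

```glsl
uniform vec3 cascadeSplits;  // view-space far distance of cascades 0, 1, 2

// Pick the cascade whose depth range contains this fragment.
int selectCascade(float viewDepth) {
    if (viewDepth < cascadeSplits.x) return 0;
    if (viewDepth < cascadeSplits.y) return 1;
    return 2;
}
// WebGL1 can't index an array of samplers dynamically, so the actual lookup
// would branch: if (c == 0) use cascade 0's map, else if (c == 1) ... etc.
```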