DigitalLibrarian opened 8 years ago
Crash Course in HLSL
http://www.catalinzima.com/xna/tutorials/crash-course-in-hlsl/
"GPU Gems" is available online
http://http.developer.nvidia.com/GPUGems/gpugems_part01.html
Unity shader tutorials from digitalerr0r:
https://digitalerr0r.wordpress.com/tutorials/
Scroll down and there are XNA 3.1 versions of the tutorials as well.
Diffuse lighting maths explained
https://digitalerr0r.wordpress.com/2015/09/18/unity-5-shader-programming-2-diffuse-light/
Now it seems this has become a problem of calculating face normals on the video card. I do not want to send them for each face or vertex.
I might be able to exploit the axis aligned nature of these things. It might be possible to exploit knowledge of the triangle draw order and cycle through a list of pre-calculated normals.
More HLSL XNA tutorials
http://rbwhitaker.wikidot.com/hlsl-tutorials
Try just stealing the shader at the end of the specular lighting tutorial. It should be able to do ambient, diffuse, and specular lighting with little to no modification.
Need to experiment with adding bump maps for voxels to give them a slight grain. This could help with the "clay" look and feel.
Calculating Normals in Shader maths tip
http://tonfilm.blogspot.com/2007/01/calculate-normals-in-shader.html
Re: normals
Look out for non-uniform scaling factors
http://www.lighthouse3d.com/tutorials/glsl-12-tutorial/the-normal-matrix/
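The takeaway from that link: under non-uniform scaling, normals must be transformed by the inverse transpose of the world matrix, not the world matrix itself, or they stop being perpendicular to the surface. A minimal HLSL sketch of the shader side (`WorldInverseTranspose` is an assumed effect parameter set from the CPU):

```hlsl
// Assumed effect parameter: transpose(inverse(World)), computed CPU-side.
// Under uniform scaling this equals World (up to scale), so World alone works there.
float4x4 WorldInverseTranspose;

float3 TransformNormal(float3 localNormal)
{
    // Use only the rotation/scale part of the matrix, then renormalize.
    return normalize(mul(localNormal, (float3x3)WorldInverseTranspose));
}
```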
"Witch of Agnesi" method for calculating normals on arbitrary meshes
Per-pixel lighting using the Blinn-Phong lighting model, implemented in HLSL
Various things I have read imply that one can use the "gradient" of the voxel model to produce a direction vector that is basically the normal. However, I haven't seen anybody spell out exactly what this means.
I believe this is the algorithm for calculating surface normals.
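My reading of the "gradient" trick: treat the voxel model as a density field (1 inside solid, 0 in empty space) and take central differences; the gradient points from solid toward empty, so its normalization gives a smooth surface normal. A hedged sketch, assuming the chunk is exposed to the shader as a 3D occupancy texture (`DensitySampler` and `VoxelStep` are made-up names):

```hlsl
// Sketch only: DensitySampler samples a hypothetical 3D texture where solid
// voxels are 1 and empty are 0; VoxelStep is one voxel in texture coordinates.
sampler3D DensitySampler;
float VoxelStep;

float3 GradientNormal(float3 uvw)
{
    // Central differences approximate the gradient of the density field.
    float dx = tex3D(DensitySampler, uvw + float3(VoxelStep, 0, 0)).r
             - tex3D(DensitySampler, uvw - float3(VoxelStep, 0, 0)).r;
    float dy = tex3D(DensitySampler, uvw + float3(0, VoxelStep, 0)).r
             - tex3D(DensitySampler, uvw - float3(0, VoxelStep, 0)).r;
    float dz = tex3D(DensitySampler, uvw + float3(0, 0, VoxelStep)).r
             - tex3D(DensitySampler, uvw - float3(0, 0, VoxelStep)).r;
    // Density increases going into the solid, so the normal is the negated gradient.
    return -normalize(float3(dx, dy, dz));
}
```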
Infer surface normals from Depth Buffer
http://ttic.uchicago.edu/~cotter/projects/voxel/doxygen/mrf.html
Gradient Estimation in Volume Data using 4D Linear Regression
Normal Generation in the Pixel Shader (and lots of good links)
http://www.enkisoftware.com/devlogpost-20150131-1-Normal_generation_in_the_pixel_shader.html
Heuristic method for "Finding Surface Normals From Voxels"
http://www.ppmsite.com/sibgrapi2007/finding_surface_normals_from_voxels.pdf
Here is an implementation of the obvious way to take a list of triangles and generate normals for them.
http://www.riemers.net/eng/Tutorials/XNA/Csharp/ShortTuts/Normal_generation.php
We are going to have to send some normal information in the vertex buffer, as the vertex shader cannot access multiple vertices of a triangle at the same time to calculate it.
In light of this, the best we can do is send a short where values 0-5 represent one of the 6 axis aligned normal vectors. The vertex definition will have to include this and the shader can look it up to compute lighting.
While we are at it, there should be a scheme for sending only two-byte floats for the coordinates (6 bytes total, rather than the 12 we currently send per position). These could probably be stored as normalized shorts, and the GPU can scale them up to world coordinates as needed.
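The normal-index idea above can be sketched as a vertex-shader lookup into a constant table (the input semantic and field names here are assumptions, not the actual vertex definition):

```hlsl
// The six axis-aligned unit normals, indexed by the 0-5 value from the vertex buffer.
static const float3 AxisNormals[6] =
{
    float3( 1, 0, 0), float3(-1, 0, 0),
    float3( 0, 1, 0), float3( 0, -1, 0),
    float3( 0, 0, 1), float3( 0, 0, -1)
};

struct VertexInput
{
    float3 Position    : POSITION0;
    float  NormalIndex : TEXCOORD0; // 0-5, sent as a small integer per vertex
};

float3 LookupNormal(VertexInput input)
{
    return AxisNormals[(int)input.NormalIndex];
}
```

This keeps the per-vertex cost at one extra component while the full float3 normal never leaves the GPU.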
Normal generation and using eye-relative positioning to calculate normals
http://www.enkisoftware.com/devlogpost-20150131-1-Normal_generation_in_the_pixel_shader.html
"Computing normals in a fragment shader"
https://bitbucket.org/volumesoffun/polyvox/wiki/Computing%20normals%20in%20a%20fragment%20shader
That last link posits that calculating the normal in a fragment shader is as easy as:

```hlsl
float3 worldNormal = cross(ddy(inWorldPosition.xyz), ddx(inWorldPosition.xyz));
worldNormal = normalize(worldNormal);
```
The current shader, which has diffuse lighting, gets some artifacts at certain angles at a distance. Just something to check for when revisiting.
Create techniques for diffuse lighting toggle
The current shader is ugly as hell. Simply adding directional light would go a long way towards getting that 'clay' look and feel that I'm shooting for.
The lighting models are relatively simple math. The hard part is that calculating directional lighting requires that you know the normal for each vertex.
This is not something I wish to send to the GPU, so the cube nature of a chunk should be exploited and the normals calculated on the fly on the GPU.
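For reference, the diffuse part of that math is just Lambert's cosine term; a minimal sketch of the shader side, assuming `LightDirection`, `DiffuseColor`, and `AmbientColor` as effect parameters (names made up here):

```hlsl
// Assumed effect parameters; LightDirection points from the light toward the scene.
float3 LightDirection;
float3 DiffuseColor;
float3 AmbientColor;

float3 ComputeLighting(float3 normal)
{
    // saturate clamps the cosine so faces pointing away from the light get no diffuse term.
    float lambert = saturate(dot(normalize(normal), -LightDirection));
    return AmbientColor + DiffuseColor * lambert;
}
```

Once the normal problem is solved, this is all the shader needs for the flat directional look; specular can be layered on afterwards per the tutorial above.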