CaffeineViking closed this issue 5 years ago.
Instead of Phone-Wire AA (which doesn't solve all of the problems we're having), we use something like GPAA, but simplified, since we're only rendering lines anyway. The trick is to use gl_FragCoord, which gives us the fragment's position in screen space, together with the interpolated position varying, which gives us the "center" of the thick line. We just need to transform position into screen space instead of world space, and the distance between the two is then the fragment's pixel distance from the center of the thick line. I was confused at first, since I was expecting to get "real" fragment positions from position, but that's of course not the case, since we're not doing any billboarding and are using plain lines instead. The results are as expected, and remove all of the jaggies. I made the thickness configurable, as well as the opacity we use for blending. Increasing the thickness is a bad idea here, since it causes more fragments to be inserted into the PPLL; I've found that a value of 2 is good enough for the ponytail.
float gpaa(vec2 screen_fragment,
           vec4 world_line,
           mat4 view_projection,
           vec2 resolution,
           float line_thickness) {
    // Transforms: world -> clip -> NDC -> screen.
    vec4 clip_line = view_projection * world_line;
    vec3 ndc_line = clip_line.xyz / clip_line.w;
    vec2 screen_line = ndc_line.xy; // still NDC at this point.
    // Transform NDC to screen space to compare with the fragment sample.
    screen_line = (screen_line + 1.0f) * (resolution / 2.0f);
    // Distance is measured in screen-space pixels.
    float d = length(screen_line - screen_fragment);
    // Finally, the coverage is based on the line thickness.
    return 1.00f - (d / (line_thickness / 2.00f));
}
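For context, here is a minimal sketch of how a fragment shader could use this; the names fs_position, strand_color, strand_opacity and the uniforms are placeholders, not necessarily what the project actually uses:

// Assumes gpaa(...) is defined above in the same shader.
in vec3 fs_position; // interpolated line center in world space (placeholder name).
out vec4 color;

uniform mat4 view_projection;
uniform vec2 resolution;      // viewport size in pixels.
uniform float line_thickness; // e.g. 2.0 works well for the ponytail.
uniform vec3 strand_color;
uniform float strand_opacity;

void main() {
    float coverage = gpaa(gl_FragCoord.xy, vec4(fs_position, 1.0f),
                          view_projection, resolution, line_thickness);
    coverage = clamp(coverage, 0.0f, 1.0f); // fade out towards the line's edges.
    // The coverage modulates the alpha that gets blended (e.g. into the PPLL).
    color = vec4(strand_color, strand_opacity * coverage);
}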
Excellent work, how does it look? :)
After implementing #28 we want to "fix" the aliasing in the rasterizer as well, which mostly happens in low-density areas. We can solve this by using e.g. Phone-Wire AA, which essentially clamps a line to be at least a pixel wide, and then fakes the reduced thickness by scaling the fragment's alpha by the thickness reduction ratio. Emil provides us with a sample implementation of it, which works like this:
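(The referenced sample isn't reproduced here; the snippet below is only a rough sketch of the idea, with made-up names, to show how the alpha scaling would work.)

// Rough sketch of the Phone-Wire AA idea: clamp the on-screen line radius
// to at least about a pixel, and scale the alpha by how much thinner the
// line actually wanted to be.
float phone_wire_alpha(inout float radius_in_pixels, float min_radius_in_pixels) {
    float clamped_radius = max(radius_in_pixels, min_radius_in_pixels);
    float alpha = radius_in_pixels / clamped_radius; // < 1 when the line was too thin.
    radius_in_pixels = clamped_radius; // draw at the clamped thickness instead.
    return alpha; // multiply the fragment's alpha with this.
}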
Maybe we can also get away with just using one-pixel-clamped lines and then changing the alpha component based on the thickness (which we change in the data itself), i.e. we modulate the alpha of the strand as a pre-processing step based on the thickness, where the last 10-15% are interpolated towards 0 to thin out the tip.
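A rough sketch of that idea, assuming each vertex knows its normalized position t along the strand (0 at the root, 1 at the tip; the names are placeholders):

// Fade the strand's alpha over the last ~15% towards the tip,
// so the thin tip blends out instead of ending abruptly.
float strand_tip_alpha(float t) { // t in [0, 1] along the strand.
    return 1.0f - smoothstep(0.85f, 1.0f, t);
}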
Otherwise, we can use the strand coverage calculation as done in TressFX/TressFXRendering.hls#172 as well:
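(The actual TressFX code isn't reproduced here; the sketch below only shows the general shape of such a coverage term, with placeholder names: the fragment is fully covered inside the fiber and the coverage falls off linearly over a one-pixel band around the fiber's edge.)

// Sketch of a TressFX-style coverage term (not the actual TressFX code):
// distance_to_center and fiber_radius are both measured in pixels.
float strand_coverage(float distance_to_center, float fiber_radius) {
    return clamp(fiber_radius + 0.5f - distance_to_center, 0.0f, 1.0f);
}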