CaffeineViking / vkhr

Real-Time Hybrid Hair Rendering using Vulkan™
MIT License

Visualize Hair Strand Density as an AO-term #19

Closed: CaffeineViking closed this issue 5 years ago

CaffeineViking commented 5 years ago

After implementing #18, we want to try interpreting the densities as a sort of AO-term for shadowing. This requires us to write code that uploads the volume and samples it in the strand's fragment shader. It is also a good opportunity to verify that the volume is sampled correctly (i.e. in the correct coordinate space).
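Roughly, the fragment-shader side could look something like the minimal sketch below. The binding points, uniform names, and the world-to-volume mapping are illustrative assumptions, not the actual vkhr interface:

```glsl
#version 460 core

// Sketch only: bindings and names are assumptions, not vkhr's actual interface.
layout(binding = 0) uniform sampler3D strand_density; // voxelized densities, normalized to [0, 1]

layout(binding = 1) uniform VolumeBounds {
    vec3 volume_origin; // world-space AABB minimum of the voxelized region
    vec3 volume_size;   // world-space AABB extent of the voxelized region
};

layout(location = 0) in  vec3 fs_position; // world-space fragment position
layout(location = 1) in  vec3 fs_color;    // shaded strand color
layout(location = 0) out vec4 out_color;

void main() {
    // Map the world-space position into the volume's [0, 1]^3 texture space.
    vec3 uvw = (fs_position - volume_origin) / volume_size;

    // More strands in a voxel => more potential self-shadowing => darker.
    float occlusion = 1.0 - texture(strand_density, uvw).r;

    out_color = vec4(fs_color * occlusion, 1.0);
}
```

The important bit to get right is the world-to-volume transform, which is exactly the coordinate-space check mentioned above.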

CaffeineViking commented 5 years ago

I've implemented the above (I just created this issue to track progress a bit better, and to report some of my results). Here are the results I get when visualizing the inverse densities directly (we want more strands to mean more potential self-shadowing, hence 1.0 - density). I've tried it with both the fast solution and the high-quality voxelization so we can compare results. The left one is the fast version, the right one is the high-quality version.

Figures: both results use a 256³ voxelization of the ponytail. The one on the left is the voxelization where we count the number of vertices in a voxel, and the one on the right is the one where we count the number of segments in a voxel. The vertex-based one is fast but a bit noisy (probably not good for direct visualization like this, although once we gather the densities by raycasting I think the results will be a lot better), while the segment-based one is very high-quality but slow. Performance-wise, neither is a huge hit just yet.
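For context, the vertex-counting (fast) voxelization is conceptually just a scatter with atomics. A minimal compute-shader sketch is below, assuming a 256³ grid, an r32ui counter image, and a vertex SSBO; all names, bindings, and layouts here are illustrative, not the actual vkhr code:

```glsl
#version 460 core

// Sketch only: a per-vertex scatter into a 256^3 counter grid.
layout(local_size_x = 256) in;

layout(binding = 0, r32ui) uniform uimage3D density_counts; // 256^3 voxel counters

layout(std430, binding = 1) readonly buffer Vertices {
    vec4 vertices[]; // world-space strand vertex positions (xyz)
};

layout(binding = 2) uniform VolumeBounds {
    vec3 volume_origin; // world-space AABB minimum of the voxelized region
    vec3 volume_size;   // world-space AABB extent of the voxelized region
};

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(vertices.length())) return;

    // Find the voxel this strand vertex falls into and bump its counter.
    vec3  uvw   = (vertices[i].xyz - volume_origin) / volume_size;
    ivec3 voxel = clamp(ivec3(uvw * 256.0), ivec3(0), ivec3(255));
    imageAtomicAdd(density_counts, voxel, 1u);
}
```

The segment-based variant would instead walk each line segment through every voxel it crosses, which is presumably why it is higher quality (no gaps between consecutive vertices) but slower.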

I'm going to add some additional results to the Captain's Log in an hour or so, where I compare the shading when enabling both the AO-term and the visibility from the shadow map. I think the results are better when we also add the AO-term. I'm still integrating the density map data into the ADSM-approach, so we may or may not need to add the AO-term manually as we do now, since that will probably happen implicitly.

Anteru commented 5 years ago

That looks quite good. I wonder if the per-vertex one would look better if you take the max in a 3x3 neighborhood; with shading, both should look great though. The only problem I have is that the part where the ponytail starts is not "occluded" enough, and that should hopefully be easy to solve by looking in a wider radius (or we create a max-based mip-map and use that, which would move the filtering cost from volume sampling to generation, and that should be a win in theory).

Promising results! Also, Merry Christmas! :christmas_tree: :santa:

CaffeineViking commented 5 years ago

I'll try taking the 3x3 max and see what we get; it sounds like it should work (the problem is, like you said, that the radius isn't large enough). I've published a newer entry in the Captain's Log with the shaded results, and some other goodies (I'll show this to Jason tomorrow).
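Concretely, I'm thinking of something like the helper below, dropped into the strand fragment shader from the sketch earlier. This is only a sketch: the function name and the fixed 3x3x3 neighborhood are just for illustration, and a max-based mip-map would move this cost to volume generation instead:

```glsl
// Sketch: take the maximum density in a 3x3x3 voxel neighborhood around
// the sample point, so thin-but-dense regions still occlude enough.
float max_density_3x3x3(sampler3D density, vec3 uvw) {
    vec3  texel = 1.0 / vec3(textureSize(density, 0));
    float max_d = 0.0;
    for (int z = -1; z <= 1; ++z)
    for (int y = -1; y <= 1; ++y)
    for (int x = -1; x <= 1; ++x)
        max_d = max(max_d, texture(density, uvw + vec3(x, y, z) * texel).r);
    return max_d; // use 1.0 - max_d as the AO-term as before
}
```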

It's really cold up here in Sweden now! Have a Merry Christmas! :-)