CaffeineViking / vkhr

Real-Time Hybrid Hair Rendering using Vulkan™
MIT License

Implement a Simulation Pass #17

Open CaffeineViking opened 5 years ago

CaffeineViking commented 5 years ago

There were concerns in the latest call (2018-12-03) that we should have a very simple animation of the strands, to validate that the techniques we have implemented also work OK when animated. A very good question is whether the approximated deep shadow solution will completely break apart when animated, as exactly how much visual "error" will occur is uncertain (it could be well within limits, and if so, we're fine).
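
For validation purposes, the "very simple animation" does not even need a simulation. Below is a minimal sketch of the idea in GLSL: a procedural sway added in the strand vertex shader, weighted so the roots stay pinned to the scalp. The bindings and the `root_distance` attribute (0 at the root, 1 at the tip) are hypothetical and not vkhr's actual interface.

```glsl
#version 460 core

// Hypothetical layout: a procedural sway test, not vkhr's actual interface.
layout(location = 0) in vec3  position;      // strand vertex in model space
layout(location = 1) in float root_distance; // 0 at the root, 1 at the tip

layout(binding = 0) uniform Transform {
    mat4  model_view_projection;
    float time; // elapsed seconds, advanced by the host every frame
};

void main() {
    // Fade the sway in along the strand so the roots stay pinned while the
    // tips swing the most. Amplitude and frequency are arbitrary.
    vec3 sway = vec3(sin(time + 4.0 * position.x), 0.0,
                     cos(time + 4.0 * position.z));
    vec3 animated = position + 0.035 * sway * root_distance;
    gl_Position = model_view_projection * vec4(animated, 1.0);
}
```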

Crystal Dynamics solves the issues in the technique by splitting the hair style into several parts that each have roughly the same "density", and assuming that the density remains mostly unchanged. In our case, since we don't split the hair style, we could get very wrong results, as the hair density could be completely off. As suggested, we could store a pre-calculated "thickness map", like in Colin Barré-Brisebois's fast SSS method, and use that to infer the density (this would involve a 3-D texture for the density). Another way would be to store a per-strand value that says how densely clustered strands are in the local neighborhood; see the sketch below. I believe the latter technique to be the better solution, and AFAIK it has not been attempted before in related work. Of course, that still won't solve the problem of the density deviating strongly from the original values when animated.
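
To make the per-strand idea concrete, here is a rough GLSL compute sketch: one invocation per strand averages samples of the voxelized density volume along its vertices into a single scalar. All names (`density_volume`, `strand_offsets`, the volume bounds) are assumptions for illustration, not the actual vkhr bindings.

```glsl
#version 460 core

// Per-strand density sketch: averages the voxelized density volume along
// each strand into one scalar. All names here are hypothetical.
layout(local_size_x = 64) in;

layout(binding = 0) uniform sampler3D density_volume; // from the voxelization pass

layout(std430, binding = 1) readonly  buffer Vertices      { vec4  vertices[];       };
layout(std430, binding = 2) readonly  buffer StrandOffsets { uint  strand_offsets[]; }; // prefix offsets, one per strand, plus a sentinel
layout(std430, binding = 3) writeonly buffer StrandDensity { float strand_density[]; };

layout(binding = 4) uniform VolumeBounds {
    vec3 volume_origin; // AABB minimum of the voxelized region
    vec3 volume_size;   // AABB extent  of the voxelized region
};

void main() {
    uint strand = gl_GlobalInvocationID.x;
    if (strand + 1u >= uint(strand_offsets.length())) return;

    uint first = strand_offsets[strand];
    uint last  = strand_offsets[strand + 1u];

    // Average the sampled densities into one scalar that says how densely
    // clustered this strand's local neighborhood is in the rest pose.
    float accumulated = 0.0;
    for (uint v = first; v < last; ++v) {
        vec3 uvw = (vertices[v].xyz - volume_origin) / volume_size;
        accumulated += textureLod(density_volume, uvw, 0.0).r;
    }
    strand_density[strand] = accumulated / float(max(last - first, 1u));
}
```

This would run once over the rest pose, so the per-strand scalar is pre-calculated, just like the thickness map variant.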

We need to think about this. If the error margins are too high we might have to either switch the self-shadowing technique, or compensate for the error somehow. I think we should discuss this in the next call as well; I won't try to get started on this just yet, as the other issues currently take major implementation priority.

It would also be interesting to see the animated results, especially of the voxelization and of what e.g. the AO looks like, since a major feature of our method is that the results can be animated (i.e. the technique is "general").

The simplest solution would be to take the TressFX simulation code and integrate it. For now we have opted not to do this, since other things were more important. However, if I or anyone else ever gets around to it, the TressFX/TressFXSimulation.hlsl shader seems ripe for the taking (and is under a nice license). Since we are using SPIR-V anyway, and our glslc.py script can compile HLSL --> SPIR-V, we could maybe even use the shader as-is, and just provide the right descriptor bindings to it. Anyway, this is future work. Good luck!
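
In the meantime, a bare-bones simulation pass needs far less than TressFX. The sketch below is just a Verlet integration step over the strand vertex buffer in GLSL; it is not TressFX's method (no length constraints, no local/global shape constraints), and all bindings are hypothetical.

```glsl
#version 460 core

// Not TressFX: a bare-bones Verlet step over the strand vertex buffer,
// with hypothetical bindings. Roots are pinned via w = 0 on the vertex.
layout(local_size_x = 64) in;

layout(std430, binding = 0) buffer Positions     { vec4 positions[];          };
layout(std430, binding = 1) buffer PrevPositions { vec4 previous_positions[]; };

layout(binding = 2) uniform Simulation {
    vec3  gravity;    // e.g. (0.0, -9.81, 0.0)
    float delta_time; // fixed timestep in seconds
};

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= uint(positions.length())) return;

    vec4 p = positions[i];
    if (p.w == 0.0) return; // pinned root vertex: leave it on the scalp

    // Verlet integration: the previous position stands in for a velocity.
    vec3 velocity = p.xyz - previous_positions[i].xyz;
    vec3 next     = p.xyz + velocity + gravity * delta_time * delta_time;

    previous_positions[i] = p;
    positions[i]          = vec4(next, p.w);
}
```

A real pass would follow this with constraint enforcement (restoring segment lengths, and in TressFX's case the local/global shape constraints), but even this much would be enough to see how the voxelization and self-shadowing react to motion.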