xen2 opened 5 years ago
I barely know Xenko code, so adding this feels a little over my head at the moment... but I'm investigating it and I have some thoughts and questions:
Is Xenko skinning always on the GPU? I couldn't find any CPU skinning code.
Three methods of implementing Morph Target (Blend Shapes) are:
Method 1. GPU Vertex Shaders (recomputed every frame for every active morph target)
Method 2. CPU VB/IB preparation (recomputed when they change, on the CPU)
Method 3. GPU Compute Shaders (recomputed when they change, on the GPU)
Speaking only of the Engine/Rendering (not the Studio/Asset management part)... I think implementations look something like this:
Method 1. GPU Vertex Shaders (recomputed every frame for every active morph target)
(a) store the morph target data, and attach it to a mesh, much like Skinning does in xenko/sources/engine/Xenko.Rendering/Rendering/Mesh.cs
(b) download the necessary morph target data to the GPU, into buffers that are accessible to rendering (where and how?)
(c) write a MorphTargetRenderingFeature.cs, which allocates and uploads an array of morph target blend weights (and possibly morph target vertex offset/index), much like xenko/sources/engine/Xenko.Rendering/Rendering/SkinningRenderingFeature.cs
(d) Write a MorphTarget.xksl shader, hooked in before Skinning, with a PreTransformPosition() implementation that iterates morph targets, loads each morph target coordinate, uses the morph-weight to calculate and accumulate the blended offset, then applies the final offset to the coordinate
One downside of this approach is that it will repeat the morph target calculations every single frame... which would be a waste if there are lots of morph targets which are non-zero but seldom changed. (such as is common in avatar facial configuration)
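The per-vertex work described in (d) is the standard morph target blend: final position = base position plus the weighted sum of each target's offset. A minimal CPU-side sketch of that math (plain Python for illustration; names are mine, not Xenko API):

```python
# Standard morph target blend: v' = v + sum_i(w_i * delta_i)
def blend_vertex(base, target_offsets, weights):
    """base: (x, y, z) vertex position.
    target_offsets: per-target offset (dx, dy, dz) for this vertex.
    weights: one blend weight per morph target."""
    x, y, z = base
    for (dx, dy, dz), w in zip(target_offsets, weights):
        x += w * dx
        y += w * dy
        z += w * dz
    return (x, y, z)
```

A shader version of this is the same loop, with the base position coming from the vertex stream and the offsets from the uploaded morph target buffers.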
Method 2. CPU VB/IB preparation (recomputed when they change, on the CPU)
This involves a pre-pass to modify VB/IB data, and then update or re-upload the GPU version. This would be similar to CPU skinning, but I don't see any code for this. Does Xenko have CPU Skinning code that would serve as an example?
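The key point of Method 2 is that the blended vertex buffer is only rebuilt when a weight actually changes (a dirty flag), then re-uploaded. A small sketch of that flow, assuming hypothetical names and leaving the GPU upload step out:

```python
# Sketch of Method 2: rebuild the blended vertex positions on the CPU
# only when a weight changes, then re-upload the buffer (upload omitted).
# All names are illustrative, not Xenko API.
class MorphedMesh:
    def __init__(self, base_positions, target_offsets):
        self.base = base_positions          # list of (x, y, z)
        self.targets = target_offsets       # per target: list of (dx, dy, dz)
        self.weights = [0.0] * len(target_offsets)
        self._dirty = True
        self._blended = list(base_positions)

    def set_weight(self, index, value):
        if self.weights[index] != value:
            self.weights[index] = value
            self._dirty = True              # mark vertex buffer for rebuild

    def vertex_buffer(self):
        if self._dirty:                     # recompute only when needed
            self._blended = [
                tuple(base[c] + sum(w * t[vi][c]
                                    for t, w in zip(self.targets, self.weights))
                      for c in range(3))
                for vi, base in enumerate(self.base)
            ]
            self._dirty = False
        return self._blended
```

With this scheme, frames where no weight changes pay nothing beyond the flag check, which is exactly the win over Method 1.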
Method 3. GPU Compute Shaders (recomputed when they change, on the GPU)
Performing the morph target calculations in a compute shader (only when morph weights change) is the most efficient method, but it requires coordination with the drawing code.
For example, one way to do this is to feed the raw mesh VB/IB buffers into a compute shader, and have it produce morphed output (either as new VB/IB buffers, or as StructuredBuffers), and then those output buffers need to be fed into the draw-calls instead of the raw mesh VB/IB buffers...
There are several things in there I don't know how to do in Xenko. (a) The Xenko ComputeShader test/example only has StructuredBuffers. The graphics APIs support handing VB/IB buffers to compute shaders as raw/typed buffers (not structured buffers), but I don't know if this is punched all the way through the Xenko shading and API abstraction. (b) I don't know if there is some kind of clear pipeline mechanism to control which VB/IB buffers get fed into the draw calls.
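Conceptually, the compute pass is one GPU thread per vertex: each thread reads the base vertex and that vertex's offset in every target, blends, and writes into the output buffer that the draw call then consumes. An illustrative emulation of that kernel and its dispatch (plain Python standing in for HLSL; buffer layout and names are assumptions):

```python
# Emulation of a per-vertex compute kernel: each "thread" (vid) reads
# the base vertex plus every target's offset for that vertex and writes
# the blended result into the output buffer. Layout/names are illustrative.
def morph_kernel(vid, base_vb, target_offsets, weights, out_vb):
    x, y, z = base_vb[vid]
    for target, w in zip(target_offsets, weights):
        dx, dy, dz = target[vid]
        x += w * dx
        y += w * dy
        z += w * dz
    out_vb[vid] = (x, y, z)

def dispatch(base_vb, target_offsets, weights):
    out_vb = [None] * len(base_vb)
    for vid in range(len(base_vb)):     # one GPU thread per vertex
        morph_kernel(vid, base_vb, target_offsets, weights, out_vb)
    return out_vb
```

On the GPU this dispatch would run only when weights change, and `out_vb` is the buffer that must be bound to the draw calls in place of the raw mesh VB, which is exactly the coordination problem described above.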
It seems easiest to start with Method 1.
Of course making this work also requires (a) extending the Asset Handling and Editor Studio to support Morph Targets, (b) supporting animations of Morph Targets, (c) providing a means for code to control the blend weight for each target.
Hopefully that information is useful / helpful in some way.
https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_pref01.html This should help with the implementation of this technique.
Hey there!
I've recently been researching this engine for a project that I'm helping to get off of Unity, and so far I love what I see!
I was looking at this issue as I've been reviewing the source code today, especially the RenderingFeature interfaces. I was wondering what would be a good place to really learn more about how the engine is structured, so I can work on my own morph target implementation, given that our project depends quite a bit on it.
Any pointers anyone can offer? Is the skinning rendering feature a good starting point? That's what I've been looking at so far.
Open Collective page. Will be updated once we have all the deliverables and a possible budget: https://opencollective.com/stride3d/projects/morph-targets
Deliverables:
ModelComponent
would have a new collection of weights, which could be floats with values from 0 to 1, for example. Each of those weights relates to a morph target in the referenced Model. Those values would be controlled from the editor or at runtime through C#. We are open to changing these; users and future contributors to this feature can share their thoughts and we'll update them accordingly.
For the work you deliver, you will receive at least $800 USD (see the projects page for the additional amounts raised). If you think a deliverable costs more, please contact us through the accompanying GitHub ticket. We are more than willing to discuss features vs. budget. If you are interested, please follow the steps described here.
What is the basic use case for a morph/blend animation? Something like lip syncing?
Facial expressions, character creation in MMOs, some baked simulations like animated cloth, and special effects.
I would like to pick this up and I think I've dug around the code enough now to get started.
My current plan is to implement the morph target updates in a compute shader, with a CPU fallback for feature levels below 11.0. Reapplying each target every frame in a vertex shader is just very inefficient, and doing a compute shader for the high end with a vertex-shader fallback for the low end seems like a lot of added complexity.
I think connecting things with the editor UI is what I'm least clear on, but I'll see how it goes and ask on Discord if I can't make sense of it.
Hi @froce, awesome that you want to have a look at this. If you have any questions feel free to ask here or on Discord.
Hi, I would like to take up the morph target implementation. I went through the codebase and deliverables and think I can get it wrapped up soon.
Discord: Noah_27216 @Stride3D @stride-contributors @stride-bot @Eideren
There was already an attempt at implementing it, but I can't find it; it was abandoned when the person left (afaik).
I don't see any assignees. I am new to Stride3D and would like to explore it for our projects. Facial expressions are at the heart of it. Also, with blend shapes, one character model can be realized as multiple characters, a common technique in sports video games where all the players are different blend-shape presets on a common 3D character base. I would first get into the vertex shaders, exposed to a ModelComponent with control variables as stated. I would have to look into Stride3D's WPF UI (I believe), but that should be fine, along with help from the community on Discord for any questions or blockers I might run into!
@noa7 Added you in a thread over on Discord since you sent your Discord username over. Let me know if you would rather keep this on GitHub or through email.
This feature has an associated bounty, see this comment for more info. -Eideren
Is your feature request related to a problem? Please describe. Artists might prefer to work with a morph target animation workflow rather than bone animation (esp. for facial animation).
Describe the solution you'd like Support for Morph Target animation.
Describe alternatives you've considered Bone animation is an alternative but might not be enough in some cases (esp. facial animation).
Additional context https://en.wikipedia.org/wiki/Morph_target_animation