chriscamacho closed this issue 3 years ago
https://github.com/recp/assetkit could be a valuable resource...
I think it's worth having an in depth discussion of this, because GLTF support would be a huge feature to raylib. Let's do it, but let's do it right :)
> https://github.com/recp/assetkit could be a valuable resource...
I don't know if introducing another library is the answer, as this still has the complexity of moving the data from their structs/data containers to raylib structs.
I'm not convinced just yet that multiple shaders are needed. Using a bit-field uniform we can switch off features a particular sub-mesh doesn't need; for example, if there is no specular-gloss texture in a material, the shader skips that part of the calculation.
re: assetkit I wasn't thinking about introducing the library, but rather looking at the code to see how it's done!
@raysan5
Hi, here are some points for discussion...
First, raylib's current animation system has been designed (after long thinking) to adapt to multiple animation methods and file formats while using the minimum amount of data required to support the animations. As always, it's also designed with simplicity in mind.
The raylib animation system uses bone-based skeletal animation. The Model skeleton is defined by:
```c
// Model struct
// Animation data
int boneCount;        // Number of bones
BoneInfo *bones;      // Bones information (skeleton)
Transform *bindPose;  // Bones base transformation (pose)
```
As you can see, the bind pose defines the base skeleton pose (all bone transformations, hierarchical).
One animation consists of multiple frames; every frame can be defined as a pose (like a keyframe), which is an array of bone transformations.
```c
// Model animation
typedef struct ModelAnimation {
    int boneCount;          // Number of bones
    BoneInfo *bones;        // Bones information (skeleton)
    int frameCount;         // Number of animation frames
    Transform **framePoses; // Poses array by frame --> frames*bones*transforms
} ModelAnimation;
```
Model animations can be loaded independently of the model vertex data and applied to multiple models (as long as model bones match). Model animations can also be mixed if required.
Using pose information (bone transformations), we calculate animated vertex positions every frame, right now on the CPU side. Every mesh contains the required data (up to 4 bones of influence per vertex; it could be increased if required):
```c
// Mesh struct
// Animation vertex data
float *animVertices;  // Animated vertex positions (after bones transformations)
float *animNormals;   // Animated normals (after bones transformations)
int *boneIds;         // Vertex bone ids, up to 4 bones of influence per vertex (skinning)
float *boneWeights;   // Vertex bone weights, up to 4 bones of influence per vertex (skinning)
```
To support GPU skinning, bones information and transforms would have to be sent to shaders and applied there... but that requires sending big arrays of data to the shader (UBOs required) and also Transform Feedback support to retrieve the transformed vertices (usually required for physics). The OpenGL versions raylib targets do not support those features by default. Also, the gain for a few models won't be that big; GPU skinning is more useful when combined with model instancing, which is not supported by default either...
The current design should be enough for animation support; independently of how that information comes in from each file format, it should be possible to adapt it to raylib structures.
Not sure if I'm missing something on how animation works.
I don't know very much about the technical details of GPU vs CPU, and what would work or not. I found some resources though:
https://www.khronos.org/opengl/wiki/Skeletal_Animation talks about skeletal animation, aka skinned meshes.
https://github.com/KhronosGroup/glTF-Tutorials/blob/master/gltfTutorial/gltfTutorial_020_Skins.md Here is a glTF-specific example. I don't see why OpenGL wouldn't support certain features when the glTF examples are implemented in WebGL, OpenGL and OpenGL ES.
@chriscamacho @raysan5 Please correct me if I am wrong, or just not understanding.. I just want animation support through GLTF and ready to code it lol
As far as I can tell, almost all the animations on the sample models (Khronos GitHub) use rotation, scale and translation animations in the main, with a few using morph targets.
Sharing animations between models seems like a good idea, until you have to create different-looking meshes that are really identical but all work with the same animation; even games using MDLs with "standard" animation types (walk, run, pain, die, etc.) contain their own animations specific to the individual models.
While saving a snapshot of all the verts for every keyframe used to be practical with the low-poly models of yesteryear, even so-called low-poly models today have substantially more polys. Add in the fact that you might have a machine gun with 5-6 keyframes and a walk animation over 40-50 keyframes, both running independently, and things get substantially more complex. It's also intended that you can have, say, a translation and a rotation animation working on the same sub-mesh with different numbers of keyframes, while yet another animation is morphing (weights)... (there doesn't seem to be anything in the spec against this?)
While this sounds like a nightmare, and while each animation can animate several things at once, the component channels each only animate one thing; as I understand it, you apply all the animation channels one after the other and then build the matrix for the affected sub-mesh.
This could be implemented in a geometry shader (available since OpenGL 3.2) and would have the big advantage that no vertices would need to be streamed to the GPU each frame; as I understand it, streaming data to the GPU is one of the major performance bottlenecks.
However
at this stage, rather than worrying about animations, would it not be better to properly implement at least the surface rendering
https://github.com/KhronosGroup/glTF-Sample-Viewer/tree/master/src/shaders
I think this could be adapted if we used a bit-field uniform instead of defines to switch render features in and out per model requirement; we would then only need one (albeit largish) shader to render every loaded glTF model, and it should look almost identical to how it's displayed in other software.
It will need some light "porting" to bring it from WebGL GLSL to work on the desktop...
This is a fair old bit of work to achieve... before we even get onto the animation... to be honest it's a little daunting!
@chriscamacho @raysan5 I've begun work on this as a separate module, rgltfanim (single header, like rlights.h). However, after reviewing the comments left by raysan in this thread, it seems I should be able to load all the data into the already existing structs.
@chriscamacho I'll push my work to my fork, on branch gltf-animations.. if you want to work on it with me ;P
What happened to this feature? I saw there was a pull request but it was closed without merge?
@raysan5 @Gamerfiend Is this going to have support in the future?
The IQM animator naively "pre-renders" vertices for each keyframe of the animation and interpolates between them.
glTF often supplies animations in terms of, for example, a rotation. Looking at the example here https://github.com/KhronosGroup/glTF-Sample-Models/tree/master/2.0/AnimatedTriangle you can see there are 5 keyframes for this triangle; this would be 44*5 bytes just for a single rotation animation of a single triangle!
Additionally, it is perfectly valid to have multiple animations running concurrently, and it's equally valid that they need not have the same number of keyframes.
As I understand the problem, the only viable solution is to have only one set of vertices and transform them as per the glTF node hierarchy; this is the only way I can see a separate translation animation channel being able to work with, for example, a separate rotation channel where they have a different number of keyframes.
See https://github.com/KhronosGroup/glTF/blob/master/specification/2.0/README.md#animations for an explanation of animation channels.