Closed pjcozzi closed 6 years ago
Perhaps we could namespace the id -- for instance, "node.foo" instead of foo. It would indeed be nice for id to refer to other things; another example would be animated material parameters.
We should list what we can have as `target`, but it is definitely not only `node`.
So, answering for `node`, `path` has to be TRS, but that's not the case for other entities (of course).
Another note about `node` and TRS: whenever a node is the target of a TRS animation, the node must provide initial TRS values, not a matrix.
To get a broader/better answer, we should have more examples. For instance, we need an example with animated parameters for materials. Having one with COLLADA as input would help...
An example to animate `ambient` would be:
```json
"target": {
    "id": "BG01",
    "path": "ambient"
}
```
To refer to this:
```json
"BG01": {
    "instanceTechnique": {
        "technique": "technique1",
        "values": {
            "ambient": [
                0.341177,
                0.470588,
                0.8,
                1
            ]
        }
    }
}
```
What's not ideal is that it short-cuts some of the intermediate properties, but is it really a problem? Otherwise we would have to handle something like this:
```json
"target": {
    "id": "BG01",
    "path": "instanceTechnique.values.ambient"
}
```
Ignoring `path` for a moment, one thing we can do with `id` to reference different glTF properties is replace `id` with one of several possible properties, e.g., `"node": "node_id"` or `"material": "material_id"`.
Then the engine doesn't have to parse the string and then switch on the string; it can just check for the glTF properties it supports.
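To illustrate the point about not parsing strings, here is a minimal sketch (not spec text; the asset and ids are hypothetical) of an engine resolving a target that names the glTF property kind directly:

```javascript
// Sketch: resolving an animation target of the form { "node": "node_id" }
// or { "material": "material_id" }, instead of a generic "id" string.
function resolveTarget(target, gltf) {
  // The engine checks only the kinds it supports -- no string parsing.
  if (target.node !== undefined) {
    return { kind: "node", object: gltf.nodes[target.node] };
  }
  if (target.material !== undefined) {
    return { kind: "material", object: gltf.materials[target.material] };
  }
  return undefined; // unsupported target kind
}

// Usage with a hypothetical asset:
const gltf = {
  nodes: { node1: { translation: [0, 0, 0] } },
  materials: { BG01: { instanceTechnique: { values: { ambient: [0.3, 0.5, 0.8, 1] } } } }
};
const resolved = resolveTarget({ material: "BG01" }, gltf);
// resolved.kind === "material"
```

An unsupported kind simply resolves to `undefined`, so an engine can skip channels it does not implement.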
`path` is a bit harder to deal with. If we allow just
```json
"target": {
    "id": "BG01",
    "path": "ambient"
}
```
we are assuming that `instanceTechnique.values` is the only property in the material with animatable properties. Can we conclude, now and in future versions, that each glTF top-level property (`node`, `material`, etc.) will only have one sub-property (which could be itself) with animatable properties? Or, if it has several sub-properties with animatable properties, can we promise that their names are unique?
Perhaps we can start with this concise version, where `"ambient"` really means `"instanceTechnique.values.ambient"`, and if we need to change it later, then when a namespace is not provided, it defaults to `"instanceTechnique.values"`.
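The defaulting rule above could be implemented along these lines (a sketch under the stated assumption that the implicit namespace is `instanceTechnique.values`; the function name is illustrative):

```javascript
// Sketch: resolve a material-target path; a bare name like "ambient"
// defaults to the "instanceTechnique.values" namespace.
function resolveMaterialPath(material, path) {
  const parts = path.includes(".")
    ? path.split(".")
    : ["instanceTechnique", "values", path];
  let obj = material;
  for (const p of parts) {
    obj = obj === undefined ? undefined : obj[p];
  }
  return obj;
}

const material = {
  instanceTechnique: {
    technique: "technique1",
    values: { ambient: [0.341177, 0.470588, 0.8, 1] }
  }
};
// Both forms reach the same property:
resolveMaterialPath(material, "ambient");
resolveMaterialPath(material, "instanceTechnique.values.ambient");
```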
We can make a better decision if we know everything that is animatable. Is it:
- `light` (e.g., `color`)
- `material` (e.g., `instanceTechnique.values`)
- `node` (e.g., TRS)
- `techniques` (e.g., `parameters`)

For 1.0, I think we can get away with just `material` and `node` looking at the current spec; however, I haven't looked at skinning.
@pjcozzi it looks like we are aligned about the `path`.
Please add also `light` and `camera` properties to the list of targets.
For `id`, specifying `node` or `material` is nice to know what to look for, but is it really useful? All ids are unique, so at loading time it is trivial to resolve which object it is and figure out its class... So I am not convinced here.
Also, the `id` version has the disadvantage that the developer has to figure out whether the object owns a `"node"` or a `"material"` property, or it could just loop through all keys... To me, it feels that the implementation flow would be a bit less smooth than directly checking what's under `id`.
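Since ids are globally unique, the single-pass lookup argued for here can be sketched as follows (the category names follow glTF 1.0 top-level dictionaries; the asset is hypothetical):

```javascript
// Sketch: a loader builds one id -> object map in a single pass, so an
// animation target's id resolves without knowing its kind up front.
function buildIdMap(gltf) {
  const map = {};
  for (const category of ["nodes", "materials", "lights", "cameras", "techniques"]) {
    const dict = gltf[category] || {};
    for (const id of Object.keys(dict)) {
      map[id] = { category, object: dict[id] };
    }
  }
  return map;
}

const idMap = buildIdMap({
  nodes: { node1: {} },
  materials: { BG01: {} }
});
// idMap.BG01.category === "materials"
```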
@ryu2 Thanks - I saw your comments after I replied. Indeed material animation is important. ... any example of COLLADA input with material animation would really help :).
> Please add also light and camera properties to the list of targets.
Why camera? For animating the field-of-view? That is actually useful for some effects, but I'm not sure how often an engine, beyond a simple model viewer, would use it. I'm fine to include it though.
> For id, specifying node or material is nice to know what to look for but is it really useful ?
I forgot that `id` is globally unique, not just unique per nodes, materials, etc. Given that, we could just have `id`, but there are two advantages to knowing the glTF property, e.g., `node` or `material`:
- The engine doesn't have to build a master list of `id` -> object.

Reminder to me for the spec: allowed values for `parameter.type` will go from `FLOAT`, `FLOAT_VEC3`, and `FLOAT_VEC4` to all glTF types once we allow a `material` target.
Yes, `camera` is for the field-of-view, so you can animate a depth-of-field effect...
About `id`: it is not the engine that would need to build a master list of `id` -> object but the loader, if you like. I think during loading it is something reasonable.
If you merge two scenes you may have the same ids, so resolving ids and keeping references to objects is usually best all done at loading time. Otherwise it is possible too, but we would need to keep some "domain" associated with the scene to be able to merge scenes... a bit more complicated.
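The "domain" idea mentioned here could look something like this (a hypothetical sketch; neither the function nor the prefixing scheme is part of glTF):

```javascript
// Sketch: merging two id dictionaries, renaming colliding ids from the
// second scene with a per-scene "domain" prefix.
function mergeDicts(a, b, domain) {
  const out = { ...a };
  for (const id of Object.keys(b)) {
    // A colliding id from scene b is renamed "domain.id".
    const newId = out[id] !== undefined ? domain + "." + id : id;
    out[newId] = b[id];
  }
  return out;
}

const merged = mergeDicts({ BG01: { v: 1 } }, { BG01: { v: 2 }, other: {} }, "sceneB");
// keys: "BG01", "sceneB.BG01", "other"
```

This shows why resolving ids to object references at load time is simpler: once references exist, the string ids (and any renaming) no longer matter.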
> but the loader if you like
Yes, if a client is using the loader. I still think that we are forcing the client to program a certain way by having just `id`, but it's reasonable, all things considered.
Let's leave this open until we nail down exactly which targets exist other than `node`; then I'll do the spec work.
Note: I revised this below
Here's a proposal for animation targets.
```json
"target": {
    "id": "string of targetable glTF property id",
    "path": "string of property in the target glTF property (or one of its children, depending on the property)"
}
```
The following glTF top-level properties are targetable with `id`:

- `camera`
- `light`
- `material`
- `node`
- `technique`
The following glTF child properties are valid `path` values.

- `camera`
  - `aspect_ratio`, `yfov`, `zfar`, `znear` // all `FLOAT` (perspective)
  - `xmag`, `ymag`, `zfar`, `znear` // all `FLOAT` (orthographic)
- `light`
  - `color` // `FLOAT_VEC3`
- `material`
  - `instanceTechnique.values` // any glTF type (scalar, vector, or matrix)
- `node`
  - `rotation` // `FLOAT_VEC4` or `FLOAT_VEC3` (2D)
  - `scale` // `FLOAT_VEC3` or `FLOAT_VEC2` (2D)
  - `translation` // `FLOAT_VEC3` or `FLOAT_VEC2` (2D)
  - `matrix` // `FLOAT_MAT4` or `FLOAT_MAT3` (2D)
- `technique`
  - `parameters` // any glTF type (scalar, vector, or matrix)

On camera, if you want animated depth of field as @fabrobinet mentioned, you'll need a bit more info so that CoC can be calculated.
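A validator for the tables above could be sketched like this (the camera list is abbreviated, and `isValidTarget` is an illustrative name, not spec API):

```javascript
// Sketch: validate a channel target against the proposal's per-kind
// whitelist of path values.
const targetablePaths = {
  camera: ["aspect_ratio", "yfov", "xmag", "ymag", "zfar", "znear"],
  light: ["color"],
  node: ["rotation", "scale", "translation", "matrix"]
};

function isValidTarget(kind, path) {
  // material and technique allow any property under values/parameters.
  if (kind === "material" || kind === "technique") return true;
  const paths = targetablePaths[kind];
  return paths !== undefined && paths.includes(path);
}

isValidTarget("node", "rotation"); // true
isValidTarget("light", "matrix");  // false
```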
Given more experience, here is a revised proposal. I still want to go over this more with our artists though.
```json
"target": {
    "id": "string of targetable glTF property id",
    "path": "string of property in the target glTF property (or one of its children, depending on the property)"
}
```
The following glTF top-level properties are targetable with `id`:

- `node`
- `camera`
- `material`
- `technique`
- `light`
`path` must point to a property of type:

- `FLOAT`, `FLOAT_VEC2`, `FLOAT_VEC3`, or `FLOAT_VEC4`
- `INT`, `INT_VEC2`, `INT_VEC3`, or `INT_VEC4`

This means that boolean and matrix types are not targetable, nor are texture parameters (strings).
Given the `id`, `path` may reference properties in the following child properties. (We can explicitly list the properties in the spec, but it should also include user-defined properties in the `extra` tag.)

- `node`: any property with a compatible type.
- `camera`: any property with a compatible type.
- `material`: any property in `instanceTechnique.values` with a compatible type.
- `technique`: any property in `parameters` with a compatible type. An argument could be made for `technique` not being targetable, with parameters instead animated through a `material`. However, this means that instead of being able to animate a parameter once in a technique, it would need to be animated for each material.
- `light`: TBA

Question: in order to create a spline we need to know the type of the targeted property, but how will we know if, for example, a node's translation is `FLOAT_VEC3` (3D) or `FLOAT_VEC2` (2D)? This is more general than animation.
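The question about knowing the type matters because the sampler cannot even interpolate without knowing the component count (the keyframe stride). A minimal sketch, assuming the type is declared somewhere and using linear interpolation for simplicity:

```javascript
// Sketch: the interpolator needs the component count of the targeted
// property before it can blend two keyframes.
const componentCount = { FLOAT: 1, FLOAT_VEC2: 2, FLOAT_VEC3: 3, FLOAT_VEC4: 4 };

// Linear interpolation between two keyframes of a typed property.
function lerpKeyframe(type, a, b, t) {
  const n = componentCount[type];
  const out = new Array(n);
  for (let i = 0; i < n; i++) {
    out[i] = a[i] + (b[i] - a[i]) * t;
  }
  return out;
}

lerpKeyframe("FLOAT_VEC2", [0, 0], [2, 4], 0.5); // [1, 2]
```

If the translation were declared `FLOAT_VEC3` instead, the same keyframe data would be read with a different stride, which is exactly why the declared type has to be unambiguous.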
Thanks @pjcozzi, I'll try to answer this during the week, but right now I am converging on the new converter update.
First quick question: why not support animated matrix parameters? To prevent decomposition on the client? We should support animation of matrices; for example, it is quite common to create FX based on texture matrix animations.
@meshula for the CoC, we have in `camera` the `[x/y]fov`, `znear`, and `zfar`, so the rest of the parameters (focal plane, focus distance...) would have to be provided/computed at runtime, no? I will also keep the CoC case in mind when designing multi-pass; the technique to implement it would have to declare more parameters for sure, but just for the camera (correct me if I am wrong) I think we are OK.
> First quick question: why not support animated matrix parameters? To prevent decomposition on the client? We should support animation of matrices; for example, it is quite common to create FX based on texture matrix animations.
I did consider the texture coordinate animation case. If we make matrices a supported type, then we have to ask why we allow TRS in a `node`, since `matrix` would also be targetable. If a client needs to decompose matrices to create a full animation implementation, why have two code paths? I'd rather stay in the TRS camp. How often does texture coordinate animation need a full matrix compared to, for example, just translation? I think we are opening up a whole lot of "flexibility" with matrices, which will manifest itself as a needlessly complicated client. For example, do we always decompose a 4x4 matrix? It could be anything, not a transform. We'd need semantics for this.
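The asymmetry behind the "TRS camp" argument: composing a matrix from TRS is cheap and unambiguous, whereas decomposing an arbitrary matrix is not. A sketch of the compose direction (column-major 4x4, rotation as a unit quaternion `[x, y, z, w]`; this is standard math, not spec text):

```javascript
// Sketch: compose M = T * R * S. Each rotation column is scaled by the
// corresponding scale component; translation fills the last column.
function composeTRS(t, r, s) {
  const [x, y, z, w] = r;
  return [
    (1 - 2 * (y * y + z * z)) * s[0], (2 * (x * y + z * w)) * s[0], (2 * (x * z - y * w)) * s[0], 0,
    (2 * (x * y - z * w)) * s[1], (1 - 2 * (x * x + z * z)) * s[1], (2 * (y * z + x * w)) * s[1], 0,
    (2 * (x * z + y * w)) * s[2], (2 * (y * z - x * w)) * s[2], (1 - 2 * (x * x + y * y)) * s[2], 0,
    t[0], t[1], t[2], 1
  ];
}

// Identity rotation, unit scale: the result is a pure translation matrix.
composeTRS([1, 2, 3], [0, 0, 0, 1], [1, 1, 1]);
```

Going the other way (matrix -> TRS) requires polar decomposition or similar, breaks down for shear/projective matrices, and is ambiguous for negative scales, which is the "we'd need semantics" problem above.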
Yes, I agree; that's why I was asking about decomposition... and whether it is using matrices or TRS, we need an answer for this use case.
We could have a `TRS` property and then refer to `translation`, `position`, or `scale` inside it; to be consistent we should update TRS inside `node` too. Thinking a bit more about this... maybe a `TRS` object is not useful here.
But there is another issue. For transforms, we have a semantic like "MODELVIEW" so that the client knows explicitly that it needs to build a matrix from TRS. But in this case, e.g., for generic parameters, how could we end up with a matrix in the shader? We may not want to have to re-create a matrix in the shader from TRS...
@pjcozzi @tparisi and I agreed to propose input for this during Monday's call, one point being to animate transforms and provide a matrix to a shader.
A proposal for matrix parameters in shaders: we can do exactly as we do for nodes/transforms and animations.
We would have a parameter that is, say, a `FLOAT_MAT4`, but in order to be "animatable" (just like a node should have `translation`, `rotation`, or `scale`), instead of providing the value directly, we would refer to another parameter:
```
"textureMatrix": {
    "translation": "textureMatrix_translate",
    "type": FLOAT_MAT4  // from GL enum
},
"textureMatrix_translate": {
    "type": FLOAT_VEC3
}
```
I'll dig into it a little bit more and propose a complete example integrating animation.
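A loader could resolve the proposed indirection along these lines (a sketch only; the parameter names follow the hypothetical snippet above and only translation is handled, not full TRS):

```javascript
// Sketch: a FLOAT_MAT4 parameter whose "translation" names another
// (animatable) FLOAT_VEC3 parameter; the matrix is rebuilt from it.
function evaluateMatrixParameter(parameters, name) {
  const param = parameters[name];
  const t = parameters[param.translation].value; // the animated FLOAT_VEC3
  // Column-major 4x4 translation matrix.
  return [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  t[0], t[1], t[2], 1];
}

const parameters = {
  textureMatrix: { type: "FLOAT_MAT4", translation: "textureMatrix_translate" },
  textureMatrix_translate: { type: "FLOAT_VEC3", value: [0.25, 0, 0] } // e.g. UV scroll
};
evaluateMatrixParameter(parameters, "textureMatrix");
```

The animation system would target `textureMatrix_translate` as an ordinary `FLOAT_VEC3`, and the shader would still receive a `FLOAT_MAT4`, which is exactly the split the proposal aims for.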
Given that this is not a breaking change, isn't widely used AFAIK, and I don't think we would have time to implement it, I suggest we push this from 1.0.
Events, data and color binding and animation. Strike out animation from the spec and show me how your events, color and data binding are better than D3 + X3DOM???
@carlsonsolutiondesign I am not following. We would not remove animation from the spec. My suggestion is to keep animation targets as they are currently scoped (node transforms) and then widen the scope (e.g., to target material parameters) in an upcoming glTF version when we have enough time to implement it and make sure the design is solid.
@tparisi I still suggest we go this route.
@pjcozzi I'm not sure what the issue is here any more. Maybe close this and open others, e.g., 1) add the ability to animate arbitrary properties, and 2) add texture matrices as properties that could be animated.
I'm going to leave this open, but drop the `1.0` label, since it has a lot of useful discussion. When we revisit, we can divide it into two issues if needed.
My apologies. I googled glTF animations and I thought this was the word on animations for 1.0. node animations are likely enough.
How do you do animations? Is it in the JSON or otherwise?
John
Let me be plainer about my goals:
I still think data-driven and sensor-driven animation is important. Can someone who is familiar with the glTF spec comment on data-driven animation? Or at least, animation of data provided to glTF from outside glTF? That is, can I provide different data sets for the same animation? How does this work? Do I modify the JSON and send it back through the pipeline, or do I keep the JSON the same and just modify the live data? Where is the live data stored? What are the JSPath or JSONPath solutions in this area?
What’s the best way to combine 3D and multidimensional data for a final visualization? Time to look into cesiumjs I guess.
John
@carlsonsolutiondesign animations are stored in JSON, and the keyframes are stored in binary. There's an example here (search for "animations"): https://raw.githubusercontent.com/AnalyticalGraphicsInc/cesium/master/Apps/SampleData/models/CesiumMilkTruck/CesiumMilkTruck.gltf
glTF animations are key-frame animations stored as part of the glTF. An engine could modify/add/remove these at runtime, but, at least in Cesium, that use case is handled by giving users direct access to the transform for each node so they can modify it. This Cesium example may also be of interest to you.
If you have more Cesium questions, please post them on the Cesium forum so we can keep this repo focused on glTF.
Replaced by #1191 for easier tracking.
This is the first in a series of questions (over the next week or so) to help me nail down my glTF animation implementation and properly word the spec.
In our current design, given a channel's target, e.g.,
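A representative sketch of such a target (the id here is a placeholder):

```json
"target": {
    "id": "some-node-id",
    "path": "translation"
}
```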
Am I correct that:

- `id` can only reference a `node`?
- `path` must be `"translation"`, `"rotation"`, or `"scale"`?

I am OK with only having TRS animations for glTF 1.0, but are we confident that we won't need breaking changes to make this support `instanceTechnique` parameters, for example?