Closed tparisi closed 7 years ago
Thanks @tparisi
- Will MORPH always be scalar, e.g., INT or FLOAT? If so, why is weights in BallUV3-morph an array? Isn't it just the default weight?
- In BallUV3-morph, what is targets? I don't see target-BallUV3Custom_Morph anywhere else in the example.
- The proposal defines NORMALIZED and RELATIVE, but couldn't we always do NORMALIZED and convert RELATIVE in the converter?

I can't really answer these fully until I have preliminary support in the converter. But at least for shaders, yes, you can expect the blending to happen in the vertex shaders, at least that's how I imagine it so far (combined with skinning).
I will try to check OpenCOLLADA support for morphing this week and provide some time estimates.
@fabrobinet post 1.0?
yes post 1.0
agreed
@pjcozzi I assume morphs and non-linear interpolation are still post-1.0? The schema implies that, so I'm wondering why you removed the "post 1.0" tag...
Yes, still post 1.0. There is no post 1.0 tag anymore. Everything not 1.0 is post 1.0. We'll prioritize after we get the spec out.
great. thanks
- If so, why is weights in BallUV3-morph an array? Isn't it just the default weight?

Yes, always scalar. I think the assumption is that you may have multiple target geometries in the targets array. You'd then have a default weight for each in the weights array.

- In BallUV3-morph, what is targets?

A reference to one or many geometries that are blended with the base mesh in a weighted fashion, right?

- ...couldn't we always do NORMALIZED...

I agree that supporting only the NORMALIZED method of computing the final shape should be sufficient for glTF. It made sense to offer both in COLLADA, but I don't think it's necessary here.
The only advantage I see in keeping the RELATIVE mode around is in more efficient transmission (displacement targets have lots of zeros so they compress well). Sparse storage ( #820 ) is an alternative way to efficiently encode morph targets.
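The conversion between the two methods is mechanical. A hedged sketch (plain Python with illustrative names, not the glTF schema): COLLADA's NORMALIZED method blends absolute target geometries, RELATIVE adds weighted displacements, and re-expressing targets as displacements makes the two agree.

```python
# Illustrative sketch of the two COLLADA morph blending methods and the
# NORMALIZED -> RELATIVE conversion a converter could apply.  Names are
# assumptions for this example, not part of any spec.

def blend_normalized(base, targets, weights):
    """NORMALIZED: result = base * (1 - sum(w)) + sum(w_i * target_i)."""
    w_sum = sum(weights)
    return [b * (1.0 - w_sum) + sum(w * t[i] for w, t in zip(weights, targets))
            for i, b in enumerate(base)]

def blend_relative(base, offsets, weights):
    """RELATIVE: result = base + sum(w_i * offset_i)."""
    return [b + sum(w * o[i] for w, o in zip(weights, offsets))
            for i, b in enumerate(base)]

def targets_to_offsets(base, targets):
    """Re-express absolute NORMALIZED targets as RELATIVE displacements."""
    return [[t[i] - b for i, b in enumerate(base)] for t in targets]

base = [1.0, 2.0, 3.0]
targets = [[2.0, 2.0, 3.0], [1.0, 0.0, 3.0]]
weights = [0.5, 0.25]
offsets = targets_to_offsets(base, targets)
# Both methods produce the same result once targets become displacements.
print(blend_normalized(base, targets, weights))  # [1.5, 1.5, 3.0]
print(blend_relative(base, offsets, weights))    # [1.5, 1.5, 3.0]
```

This is why a converter could always emit one of the two modes: the displacement form carries the same information, and (as noted above) it tends to be mostly zeros, which compresses well.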
A look from the implementation side (WebGL 1.0 (ES 2.0) caps only):
We've got 16 guaranteed vertex attributes. Positions, normals, tangents, UV_0&1 (could be packed), skin weights, skin joints take at least 6 (if we compute bi-tangents in shaders).
The remaining 10 could be spent on 5 morph targets (positions + normals) with weights controlled by uniforms.
So, the runtime should bind the needed morph targets (no more than 5) and animate/blend them via uniform updates.
Is that correct?
Yes, thank you for bringing up the implementation conversation. Your numbers are sound and match this article. A quick look at the Three.js sources suggests that they animate only 4 morph targets at a time (8 when there are no normals).
With no asset-provided shaders, we need to agree on one particular layout (e.g. 4/4) or introduce more parameters.
There's a more modern approach based on transform feedback, which should be supported with WebGL 2.0 (ES 3.0).
Should we design morph targets with that in mind?
Also, to use morph targets, we need to bind them with animations, and that could be tricky: each keyframe must either contain all possible targets (and the engine will bind the top 4-5 targets with non-zero weights), or explicitly bind each target at each keyframe. One more option is to forbid mixing more than 4-5 targets in one animation channel.
Yes, I believe we should design morph targets with more than one API in mind (WebGL 1, 2, and more advanced APIs as well). I consider glTF an API-independent way to transfer 3D assets for visualization. (Note: 4-5 active targets are probably OK in some cases, but I have worked with way more than that.) Rather than picking a specific API and designing how to best feed it, I believe we want to consider multiple possible implementations and make sure that we can enable them. A general approach for that would be to keep the format simple and avoid baking low-level implementation decisions into the format; rather, let's have the rendering engine make those decisions and scale back when appropriate.

For instance, if the current environment only supports WebGL 1.0, the rendering engine will have to sort out the 4-5 "most active" targets at a given frame and animate with those (as you suggested). If the current environment supports more advanced APIs we could enable a better experience, although performance could be another reason to scale down (perhaps 4 active targets are still OK for characters in the background, and cheaper to compute). All of this to say: let's keep multiple implementations in mind, and let's move decisions to the rendering engine when possible (rather than baking them into the format). Do you agree?
we want to consider multiple possible implementations and make sure that we can enable them
Of course, we do. So, let's go through different morph workloads:
WebGL 1.0
When the overall per-model morph target count is 5 or less, WebGL 1.0 engines can unpack sparse arrays to full-length vertex attributes and do the morphing in GLSL. No additional per-frame sorting is needed.
When the overall per-model morph target count is more than 5, but the active target count is 5 or less, WebGL 1.0 engines can unpack sparse arrays to full-length vertex attributes, select the most important targets, bind them, and do the morphing in GLSL. Such per-frame CPU sorting doesn't look good to me, so maybe we could add some hints to the asset.
When the active per-model morph target count is more than 5, things become more expensive for WebGL 1.0 engines. One approach is to generate a vertexID attribute, store sparse morph data in textures (won't work everywhere; some GPUs don't support vertex texture units), and process the morphing in GLSL using texture look-ups.
There could also be a fully-CPU fallback with per-frame vertex buffer updates (probably the worst approach).
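The load-time unpacking step mentioned above can be sketched as follows. The sparse layout here (separate index and displacement lists) is an assumption for illustration, not the exact #820 encoding.

```python
# Sketch: expand a sparse morph target (indices + displacements) into a
# full-length per-vertex displacement array, zeros elsewhere, as a
# WebGL 1.0 engine might do at load time before uploading attributes.

def unpack_sparse_target(vertex_count, indices, displacements):
    """Return a dense list of vertex_count displacement tuples."""
    dense = [(0.0, 0.0, 0.0)] * vertex_count
    for idx, disp in zip(indices, displacements):
        dense[idx] = disp
    return dense

# Only vertices 1 and 3 are displaced by this target.
dense = unpack_sparse_target(5, [1, 3], [(0.1, 0.0, 0.0), (0.0, 0.2, 0.0)])
print(dense)
```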
WebGL 2.0 and beyond
It seems to me that there could be a noticeable performance gap in handling complex morph/skin animations when running on different runtimes.
Yes, performance and/or experience gap. Thank you for the overview above; it provides valuable background for this conversation. Now one question for you: what can we do, at the level of the glTF specification, to help engines scale their morph target implementation depending on the supported APIs / HW capabilities / scene complexity and layout?
First thoughts about glTF side, maybe not 100% correct.
It should be clear how many morph targets a mesh has and what the maximum number of simultaneously active targets is. Animations should specify used/unused targets, so the runtime could skip manual sorting.
@lexaknyazev those both sound good to me...
How many morph targets a mesh has -> that is provided by the size of the morph::targets field, correct? Maximum number of active targets -> we can add another entry to the morph structure, sounds good? "Animations should specify used/unused targets": do you mean on a frame-by-frame basis or for the entire animation (i.e., "don't bother checking this morph target channel, it is always 0")?
How many morph targets a mesh has->that is provided by the size of the morph::targets field correct? Maximum number of active targets->we can add another entry to the morph structure, sounds good?
Both yes.
"Animations should specify used/unused targets". You mean on a frame-by-frame basis or for the entire animation (i.e. "don't bother checking this morph target channel it is always 0").
We need to carefully review current animation system from the perspective of performance for skinned/morphed/animated mesh. There were concerns about non-fixed framerate (so implementations need to build cache on load or search proper keyframe too often).
I'm not sure that using full-featured mesh objects as targets is a good idea. Since morphed variants are tied to the original mesh, maybe it would be better to directly override accessors. Also, as a general rule, I'd not introduce new global-space objects if they can't be reused (i.e., could the same target meshes be used with a different base mesh? If not, why put them in global space?).
There's also the not-quite-resolved issue of "instances" of skin-based animation (e.g., since animations target nodes, we need to duplicate skeleton nodes to have different animations targeting different instances of the same mesh). We could get similar issues with morphed meshes.
Also see https://github.com/KhronosGroup/glTF/issues/821 and https://github.com/KhronosGroup/glTF/issues/723.
So, if I understood correctly, what you are suggesting is, rather than expressing targets as global meshes (and having the morph blend among their attributes), to express targets as a list of overrides. Something along the lines of this:
"BallUV3-morph": {
“method” : “NORMALIZED”,
“source”: "BallUV3_geometry",
"targets": [
{
name: "target-BallUV3Custom_Morph",
primitivesOverrides:[{ //same size as the source mesh primitives array
attribute: {
"POSITION": "accessor_target_1", //this target is only overriding the position attribute
}
}],
"defaultWeight": 0
}]
}
For instancing multiple animated morph targets within a single node we can add a level of indirection: node::instancedmorphs = { "myMorphInstance1": "myFirstMorph", "myMorphInstance2": "myFirstMorph" }. Then in your animation you specify "path": "instancedmorphs/myMorphInstance1" to select the morph you want to animate. (This doesn't solve the situation where you have both skinning and morph targets.)
Regarding the updated targets scheme:

- A mesh could have many primitives; it's not clear which one is overridden by the morph. Related: #821.
- The weights array matches the length of the targets array, so we could store the weights inside.

More on the performance/experience gap:
The "baseline" morphing level should limit the maximum number of active per-frame targets to 4, so they could be animated via one vec4 uniform update. Anything beyond that should require more capable hardware (UBOs).
Regarding animation/target specifying
You mean on a frame-by-frame basis or for the entire animation (i.e. "don't bother checking this morph target channel it is always 0").
Can't say much about the exact layout yet, but the goal should be to minimize engine burden. Would it be OK to state that each animation channel uses some specific set of morph targets (fewer than 5), and that different animation channels of the same animation can't overlap (time-wise)?
Regarding animation/skinning/morphing instances: the animation system is meant to be a "library" of possible animations, not a predefined timeline script. So we don't have animation loops, animation blending, animation dependencies, animation auto-start, etc.
One could look at the current animation.target -> node link as a "class" and instantiate the actual animated skin/morph in the engine (by creating a new instance of the skeleton tree and/or morph targets). In that case we should have two distinct types of nodes: nodes used in the actual scene graph, and nodes used for instantiating animations/skins.
With such setup, the whole issue of instantiating skinned/morphed meshes should be resolved by application for now, while we could later define a spec extension/revision for most common use cases.
@emilian0 when this discussion converges - or reaches some level of critical mass - could you please post a complete-ish JSON example like you did in #820? I would like to do a quick review, but don't have the bandwidth right now to help develop the schema itself.
@tparisi I would like your take on moving away from COLLADA’s style of specifying morph targets as full meshes and instead encode a list of overridden attributes. This was a suggestion from Alexey, I support it and I formalized it 4 posts above this one. Thanks!
@lexaknyazev I integrated your comments into the "targets" schema above, thank you! I am not sure about limiting the number of active targets to 4. 4 active blend shapes aren't very many, and with WebGL 2.0 on its way I would rather not bake this constraint into glTF. In fact I would leave it up to the engine to decide how many active targets to use. As mentioned above, the available API is only one factor in this decision; you also want to know how close the morphed mesh is to the camera, the current framerate, and other runtime information. What do you think?
@emilian0 I'm all for it. Let's come up with a more compact encoding! Yours looks good but honestly I'm not too worried about the details... do what you think is best.
Have you checked those COLLADA KHR extensions? https://www.khronos.org/collada/wiki/Khronos_extensions
@RemiArnaud: Issue #820 derives from one of the COLLADA extensions you pointed out. I don't think the morph weight extension is helpful to us; what do you think? Thanks
Not directly applicable, but food for thought for morph weight animation in glTF.
@pjcozzi @lexaknyazev . This is where we are on morph targets. Please let me know what you think.
Morph targets are defined from a source base mesh and an array of targets.
Each target stores a set of primitives overriding the corresponding primitives in the base mesh. Each target has a default weight associated with it.
The morph target deformation is defined by combining the source mesh primitives with the target primitives, based on the weights of the targets and on the morph method (supported methods are NORMALIZED and RELATIVE).
A node can instantiate either a mesh or a morph. The morph target deformation is resolved before skinning is applied; for the purpose of skinning, a morph target is therefore handled the same way a mesh is. The skinning weights for a morph target are defined in the morph target's source mesh.
Morph targets are required to list overrides for all primitives in the source mesh, and primitive overrides should be listed in the same order as they appear in the source mesh. Only two attributes of a source mesh primitive can be overridden by the morph targets: NORMAL and POSITION. Given a source mesh primitive, all morph targets are required to override the same attributes.
Here is a sample JSON defining a morph target:
{
"morphs" : {
"morph_id": {
"name": "user-defined name of morph",
"source": "source_base_mesh_id",
"method": "morph blending strategy",
"targets": [
{
"name": "morph target1",
"weight": 0,
"primitivesOverride": [
{
"NORMAL": "accessor_id",
"POSITION": "accessor_id"
}
]
},
{
"name": "morph target2",
"weight": 0.5,
"primitivesOverride": [
{
"NORMAL": "accessor_id",
"POSITION": "accessor_id"
}
]
}
],
"extensions" : {
"extension_name" : {
"extension specific" : "value"
}
},
"extras" : {
"Application specific" : "The extra object can contain any properties."
}
}
}
}
Instantiating morph on a node together with skinning:
"luckyNode": {
"children": [],
"morphInstance": {
"morph" : "morph_id",
"weights": [],
"activeTargets":[]
},
"skeletons": [],
"skin": "skin_1"
}
morphInstance is used to instantiate a morph target. A node can only instantiate either a mesh or a morph target. Since morph targets might be instantiated multiple times in a scene using different blending weights, I am suggesting adding a weights field under morphInstance (weights should be a property of the instantiation rather than of the morph target).
weights is an array of floats of the same length as the instanced morph target's targets array.
The (optional) activeTargets array lists the indices of the most active targets (largest weights). It can be used by the engine to select the targets to bind to the vertex shader. (This should be the same as running quickselect on the weights array, which in my opinion should be fast enough.)
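The selection of the most active targets can be sketched like this. The comment above suggests quickselect; `heapq.nlargest` is a stdlib equivalent with the same effect, and the function name here is illustrative, not part of the schema.

```python
# Sketch: derive an activeTargets-style index list from a weights array by
# picking the indices of the max_active largest (absolute) weights.
import heapq

def select_active_targets(weights, max_active):
    """Return sorted indices of the max_active most influential targets."""
    return sorted(heapq.nlargest(max_active, range(len(weights)),
                                 key=lambda i: abs(weights[i])))

weights = [0.0, 0.7, 0.05, 0.3, 0.0, 0.6]
print(select_active_targets(weights, 4))  # -> [1, 2, 3, 5]
```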
Animating a morph.
Right now it looks like animations only output vec3; that will have to change to support animating morph targets. Also, I suggest generalizing the target::path field so that it can point to morphInstance/weights in addition to rotation, scale, and translation (we can make morphInstance/activeTargets animatable as well).
Sample of animation of both skin and morph:
"animations": [
{
"name": "Animate all properties of one node with different samplers",
"channels": [
{
"sampler": 0,
"target": {
"id": 1,
"path": "rotation"
}
},
{
"sampler": 1,
"target": {
"id": 2,
"path": "rotation"
}
},
{
"sampler": 2,
"target": {
"id": 1,
"path": "morphInstance/weights"
}
}
]
}]
Great! Comments below.
supported methods are NORMALIZED or RELATIVE

Do we need both? This could require different shader versions or branching.
A node can instantiate either a mesh or a morph.
An alternative approach could be to keep both properties in node and remove morph.source. In that case, clients without morphing support would show the initial mesh instead of nothing.
morphInstance.weights defines the initial morphed state, right? Starting an animation on such a mesh could lead to a "jump" on the first frame if the animation contains different initial weights. That should be fine, but worth noting, imho (e.g., users may want to sync initial weights with animations).
The (optional) activeTargets array lists the indices of the most active targets (largest weights). It can be used by the engine to select the targets to bind to the vertex shader.

I don't think that is needed for the initial weights array. On the other hand, I'd like to know the most influential targets during animation.
it looks like animations only output vec3...
animation.sampler uses accessors to get per-frame data. Supported types are vec3 (for positions and scale) and vec4 (for rotations).
How should weights be stored? (It's non-trivial to store more than 4 floats per frame with the current scheme.)
Do we need a variable weight count per frame? Packing could be inevitable in that case.
Anyway, we need to know both the target index and its weight so as not to store lots of zeros.
Most glTF fields aren't as verbose as primitivesOverride or morphInstance. Would it be reasonable to shorten them to one word? @pjcozzi
Thanks for the feedback @lexaknyazev. Here are my answers:

- Removing morph.source would make the morph structure no longer self-contained, and I would like to avoid that; would you be fine with keeping the morph.source property? Then, as you suggested, we can say that a client not understanding morphs will instantiate the mesh, whereas a client understanding morphs will "overrule" the mesh field and just instantiate the morph.
- I will make morphInstance.weights and morphInstance.activeTargets both optional. Then we recommend not using these properties when a morph target is animated.
- Each animation frame stores all morphInstance.weights simultaneously (so an array of size #targets), for a total memory occupancy of #frames * #targets floats per animation. I would expect ~2/3 of this data to be zeroes; this said, I wouldn't expect morph animations to be too memory intensive. Say you have ~16 blend shapes: that is 16 floats a frame, the same as what is required to animate the head bones (eyes, neck, jaw). This is very indicative and depends on rigs and animation requirements. Let's chat more about this during our weekly call.

Most glTF fields aren't as verbose as primitivesOverride or morphInstance. Would it be reasonable to shorten them to one word? @pjcozzi
Sounds good, name suggestions?
@emilian0 really nice start here. A few basic questions/comments:

- morphs should be an array, node.morphInstance.morph should be a number, source should be a number, etc.
- Does each target need a name? Will this be useful to UIs? Usually only two-level objects have name properties.
- Why is primitivesOverride an array? Because a mesh has an array of primitives?
- "Only two attributes of a source mesh primitive can be overridden by the morph targets: NORMAL and POSITION." Are texture coordinates ever changed during a morph? What if the model has per-vertex color, temperature, etc.? Are we making this too restrictive without any real benefit?
Thanks @pjcozzi

- primitivesOverride is an array because it describes the overridden attributes for each primitive.
- The only reason why targets should have a name is for editing purposes (if this information needs to be made available to artists). Since glTF is a runtime format more than an editing one, I am not 100% sure we need them. Perhaps leave this as an optional property?
In glTF, name is always optional. I'm asking if it is truly valuable here. In general, we do not clutter glTF with extra fields that might be useful; instead, application-specific properties can be added to the extras object.
The reason why I added this limitation is only to simplify the runtime implementation. When implementing morph targets I only worked with NORMALS and POSITIONS (bi-normals are computed). I have never dealt with morph targets encoding colors or shifting UVs... this said, I see how this might be useful.
Is your experience reasonably representative of what folks do with morph targets? If so, let's keep the limitation and we could relax it in a future version if there are use cases.
Why do we have the morph.target.weight property? Can't the initial instance state be defined by the node.morph field?
Is it reasonable to use the same weight for all mesh primitives? Could only one primitive of the mesh have morph targets?
Let's keep only the RELATIVE mode. It implies fewer runtime computations. A transform-feedback-based implementation would also be simpler than with NORMALIZED.
On morph's self-containedness:

- Keeping the node.mesh field for mesh instantiation (both morphed and not morphed) makes implementation simpler and allows "gradual" development and debugging.
- We should avoid double references (node.mesh / morph.source) so as not to add more conformance/validation rules.
- On how morph.source is connected to morph.targets:
  - it's unlikely that the same set of targets would be reused with a different morph.source (that would require a new morph object anyway);
  - multiple morph objects based on the same mesh could use different sets of targets on different mesh instances and thus reduce runtime processing.

It looks like morph.targets is mostly a mesh-related property, while everything related to weights is bound to the "mesh instance", i.e., the node. What do you think of extending the mesh.primitive object and putting the targets data there (like accessor was extended with sparse data)?
@pjcozzi re: "is target.name truly valuable?". I see little to no value in specifying target names for a run-time (rather than editing) format. So it makes sense to me to move it to an extension. Sounds good?
Instead of moving name to an extension, just remove it. Applications could put something like this in an extras property if they need it.
@pjcozzi re: "blending vertex colors or UVs". I am checking with our artists what their experience is with morph targets blending UVs or vertex colors. I have none. I see how that can be useful, but I also see how it can make the implementation (especially on WebGL 1.0) much harder ( @lexaknyazev feel free to chime in). So, unless I hear something from our artists (or anyone objects), I suggest supporting only POSITION and NORMALS for now. Sounds good?
@lexaknyazev: The idea is to have morph.target.weight set the default weights of a given morph; node.morph.weights instead overrides them when instancing the morph. This way you can:

- define default weights once in the morph (morph.target.weight)
- override them per instance via node.morph.weights

Is this ok with you @lexaknyazev? Thanks!
@lexaknyazev I am pleased we agreed on the RELATIVE mode only!
@lexaknyazev I was thinking the same: extend the concept of mesh to include the case of morphable meshes. This is quite a change; I will spec it out and ping all of you for an additional pass.
unless I hear something from our artists (or anyone objects), I suggest to only support POSITION and NORMALS for now. Sounds good?
Yes, thanks!
@tparisi @pjcozzi @lexaknyazev, here is the update on morph targets. Please take a look and let me know what you think. Unfortunately we don't have many iteration cycles left before tomorrow. I believe the only "invasive" change I am suggesting is regarding the animation of morph targets. Please let me know what you think and if you have better ideas.
Morph Targets are defined in glTF 2.0 as an extension to the Mesh concept.
A morph target is a deformable mesh where primitives' attributes are obtained by adding the original attributes to a weighted sum of target attributes (this operation corresponds to COLLADA's RELATIVE morph target blending method).
The targets property of primitives is an array of targets; each target is a dictionary mapping a primitive attribute to target displacements. Currently only two attributes ('POSITION' and 'NORMAL') are supported. The size of the targets array is the same for all primitives and matches the size of the weights array. All primitives are required to list morph targets in the same order.
The weights array is optional and stores the default weight associated with each target; in the absence of animations, the primitive attributes are resolved using these weights. When this property is absent, the default target weights are assumed to be zero.
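The blending rule described here (base attribute plus a weighted sum of target displacements, with absent weights defaulting to zero) can be sketched as follows; the function name is illustrative, not part of the schema.

```python
# Sketch of RELATIVE-style resolution for one attribute of one primitive:
# morphed = base + sum(weight_i * displacement_i), per component.

def resolve_attribute(base, target_displacements, weights=None):
    """Resolve a morphed attribute; absent weights default to zero."""
    if weights is None:
        weights = [0.0] * len(target_displacements)
    out = list(base)
    for w, disp in zip(weights, target_displacements):
        for i, d in enumerate(disp):
            out[i] += w * d
    return out

base = [0.0, 1.0, 0.0]
targets = [[1.0, 0.0, 0.0], [0.0, 0.0, 2.0]]
print(resolve_attribute(base, targets))             # no weights -> base mesh
print(resolve_attribute(base, targets, [0.5, 1.0])) # [0.5, 1.0, 2.0]
```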
Here is a sample JSON defining a morph target:
{
"meshes": [
{
"primitives": [
{
"attributes": {
"NORMAL": 25,
"POSITION": 23,
"TEXCOORD_0": 27
},
"indices": 21,
"material": 3,
"mode": 4,
"targets": [
{
"NORMAL": 35,
"POSITION": 33,
},
{
"NORMAL": 45,
"POSITION": 43,
},
]
}
]
}
]
}
Instantiating a morph on a node together with skinning:
{
"mesh": 1,
"skeletons": [21],
"skin": 0,
"weights":[0.0 0.5],
"targets":[0 1]
}
The (optional) weights array is only valid when the instantiated mesh is a morph target. This array specifies the weights of the instantiated morph target; it therefore has the same size as the weights array of the referenced morph target.
The (optional) activeTargets
array lists the indices of the most active targets (largest weights). It can be used by the engine to select the targets to bind to the vertex shader. (This should be the same as running quickSelect on the weight array… which in my opinion should be fast enough).
Animating a morph.
Animation needs to be extended to support arbitrarily sized output vectors (not only vec3/vec4; 4 active morph targets is not an acceptable limitation).
One way to do that is to add additional vector types such as VECx. Is this reasonable to you? Better ideas?
Sample of animation of both skin and morph:
"animations": [
{
"name": "Animate all properties of one node with different samplers",
"channels": [
{
"sampler": 0,
"target": {
"id": 1,
"path": "rotation"
}
},
{
"sampler": 1,
"target": {
"id": 2,
"path": "translation"
}
},
{
"sampler": 2,
"target": {
"id": 1,
"path": "weights"
}
},
{
"sampler": 3,
"target": {
"id": 1,
"path": "activeTargets"
}
}
]
}]
currently only two attributes ('POSITION' and 'NORMAL') are supported
How should tangent space be reconstructed for normal maps to work with a morphed mesh? We must provide exact math there.
Here's relevant excerpt from GPU Gems:
We ultimately chose to have our vertex shader apply five blend shapes that modified the position and normal. The vertex shader would then orthonormalize the neutral tangent against the new normal (that is, subtract the collinear elements of the new normal from the neutral tangent and then normalize) and take the cross product for the binormal.
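A minimal sketch of the recipe quoted above, assuming plain 3-vectors: subtract the collinear part of the morphed normal from the neutral tangent (Gram-Schmidt), normalize, and take the cross product for the binormal. The cross-product order is one possible handedness convention, not a spec mandate.

```python
# Sketch of the GPU Gems tangent-space reconstruction for a morphed vertex.
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def scale(a, s): return [x * s for x in a]
def normalize(a):
    length = math.sqrt(dot(a, a))
    return [x / length for x in a]
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def rebuild_tangent_space(neutral_tangent, morphed_normal):
    """Orthonormalize the neutral tangent against the new normal."""
    n = normalize(morphed_normal)
    t = normalize(sub(neutral_tangent, scale(n, dot(neutral_tangent, n))))
    b = cross(n, t)  # binormal; flip the order for the other handedness
    return t, b

t, b = rebuild_tangent_space([1.0, 0.1, 0.0], [0.0, 1.0, 0.0])
```

After reconstruction the tangent is exactly perpendicular to the morphed normal, which is the property normal mapping needs.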
All primitives are required to list morph targets in the same order.
Maybe, clarify also that all primitives must have the same number of targets (do they?).
One way to do that is to add additional vector types such as VECx

Since reading and sorting will be done on the CPU, we can leave them SCALAR and specify the data layout depending on the number of targets.
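One possible reading of this SCALAR layout, sketched under the assumption that the sampler's output accessor stores keyframe-major scalars with one value per target (the exact layout was still under discussion):

```python
# Sketch: slice a flat SCALAR accessor back into per-keyframe weight vectors,
# target_count scalars per keyframe.  Layout is an assumption, not spec text.

def deinterleave_weights(scalars, target_count):
    """Group flat scalar data into one weights list per keyframe."""
    assert len(scalars) % target_count == 0
    return [scalars[i:i + target_count]
            for i in range(0, len(scalars), target_count)]

# Two targets, three keyframes.
frames = deinterleave_weights([0.0, 1.0, 0.5, 0.5, 1.0, 0.0], 2)
print(frames)  # -> [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
```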
4 active morph targets is not an acceptable limitation
I understand that, but it's almost inevitable with WebGL 1.0 (three.js supports 8 targets if they contain only positions, no normals).
With WebGL 2.0, engines could use iterative approach via Transform Feedback and apply targets in batches of 4 (e.g., 12 active targets need 3 passes).
Only with OpenGL ES 3.1+ (no web equivalent yet), it will be possible to access buffer data directly from shader and apply any number of targets in one pass.
@lexaknyazev agree on the first two points.
Regarding:

Since reading and sorting will be done on the CPU, we can leave them SCALAR and specify the data layout depending on the number of targets.

That is OK. It is a little more work for the runtime though, since it prevents decoupling between animations and morph targets (the runtime can't blend the animation curves before it knows which morph targets they are for). Anyway, this simplifies the format a lot; I am on board with it.
Considering these changes, should I go ahead and make a pull request?
@lexaknyazev I would like to leave tangent/binormal computation out of the draft and up for discussion. It seems like different shader implementers use different techniques to compute them, and I don't see a reason to pick the one above (aside from the fact that it was published in GPU Gems). Sounds good?
Here is an example of the basic morph syntax I am proposing.
First, the animation, consisting of a channel to drive the morph from TIME input and MORPH output parameters.
Now, the morph controller. The NORMALIZED and RELATIVE methods are taken directly from the COLLADA spec. Do we need these? The default is NORMALIZED. In this example the single morph target has zero weight, i.e., at the resting position the geometry is un-morphed.
"morphs": { "BallUV3-morph": { “method” : “NORMALIZED”, // one of “NORMALIZED” or “RELATIVE” “source”: "BallUV3_geometry", "targets": [ "target-BallUV3Custom_Morph" ], “weights”: [ 0 ] } },
Finally, the instance of the morph (will appear inside a node):