EGjoni / Everything-Will-Be-IK

A Robust Inverse Kinematics Library
MIT License

[TRACKER] Port to C++ for Godot Engine #8

Open fire opened 4 years ago

fire commented 4 years ago

I'm trying to scope out what needs to be ported for the C++ game engine Godot Engine.

The goal is to have realistic IK tracking given three sensors (head-mounted display, two trackers), and anywhere from 3 to 10 VR tracker sensor positions.

My peers had some concerns about FABRIK's unnaturalness around the bending of the elbows, given only three sensor points.

The current plan is to use analytic solvers for the IK system, but it doesn't have sound theory behind it.

5 mentioned that some calibration is needed for scaling the body. We could use a "T" or "A" pose for doing that. Other systems like VRChat or High Fidelity used the height of the head-mounted display and the hips to estimate a possible scale. Avatars at unusual scales, like very large or very small characters, are a problem.
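Roughly, the scale estimate could look like the sketch below (illustrative only; `measured_hmd_height` and `avatar_eye_height` are assumed inputs, not anything from an existing API):

    #include <algorithm>

    // Hypothetical calibration: estimate a uniform avatar scale from the HMD's
    // height above the floor while the user holds the "T" or "A" pose.
    // avatar_eye_height is the eye height of the unscaled avatar model.
    float estimate_avatar_scale(float measured_hmd_height, float avatar_eye_height) {
        if (avatar_eye_height <= 0.0f)
            return 1.0f; // degenerate rig, keep identity scale
        float scale = measured_hmd_height / avatar_eye_height;
        // Clamp so a mis-tracked sample doesn't produce an absurd scale;
        // deliberately huge or tiny avatars would need a wider range.
        return std::clamp(scale, 0.25f, 4.0f);
    }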

EGjoni commented 4 years ago

The current plan is to use analytic solvers for the IK system, but it doesn't have sound theory behind it.

This plan is almost certainly not going to work.

Make a visualizer for kusudama constraints

The Kusudama visualizer in the YouTube link from my reply in #7 is primarily shader-based; if you'd like, I can share the code for that. Rolling your own visualizer would require understanding the math behind Kusudamas, which might not be entirely obvious to infer from the Java code (and I've yet to write a paper on them).

My peers had some concerns about FABRIK's unnaturalness around the bending of the elbows, given only three sensor points.

Your peers' concerns about "unnaturalness" are well-founded, but they aren't a limitation of FABRIK per se. They are more a problem resulting from a combination of the following three factors.

  1. Most IK systems (FABRIK included) don't account for the "twist" component of their end effectors. (If you touch the back of your right hand to your cheek [so your thumb points toward your nose], then rotate your hand around the direction of your middle finger by 360 degrees [so your thumb points toward your nose again], note the huge difference in where your shoulder and elbow had to be in the first pose vs the second pose.)
  2. Poor constraint specification.
  3. The tools of Robotics map poorly to the realities of Biology.

This library currently accounts for factor 1 (the solver can take the orientation of effectors into account -- though, at some cost to speed and convergence stability in edge cases) and factor 2 (via Kusudama constraints).

For your particular use case, factor 3 is both the most important and I think the most trivial to handle. If done correctly, a solution to 3 would (for your use case) almost completely obviate concerns regarding 1 and 2.

Depending on whether you're using this library as a reference, or developing a full port, I can either implement a solution to 3 in this library, or offer some suggestions as to a viable approach, and let you figure out the details as relevant to your specific implementation. But the very broad idea is the following (a rough sketch in code follows the list):

  1. Define a set of (preferably extreme) pre-computed poses.
  2. Determine the n-poses in that set most closely matching the user's pose.
  3. Take either a weighted average or the closest of those pre-computed poses, and run the solver using those pre-computed poses as the starting position, and the sensors as the target position.
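Very roughly, steps 2 and 3 could look like the following C++ sketch. The types and names are invented, not from this library, and the match here is done against full poses; matching against pre-computed sensor positions would work the same way:

    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    // Invented types: a pose is a flat list of per-joint rotations stored as
    // unit quaternions.
    struct Quat { float w, x, y, z; };
    using Pose = std::vector<Quat>;

    // Angular distance between two unit quaternions (|dot| handles the fact
    // that q and -q encode the same rotation).
    float quat_angle(const Quat &a, const Quat &b) {
        float d = a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
        return 2.0f * std::acos(std::fmin(1.0f, std::fabs(d)));
    }

    // Summed per-joint distance between two poses with equal joint counts.
    float pose_distance(const Pose &a, const Pose &b) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < a.size(); ++i)
            sum += quat_angle(a[i], b[i]);
        return sum;
    }

    // Step 2: find the pre-computed pose closest to the current estimate
    // (assumes at least one pre-computed pose). Step 3 then seeds the solver
    // from this pose before pulling the effectors toward the sensor targets.
    const Pose &closest_precomputed_pose(const std::vector<Pose> &precomputed,
                                         const Pose &current) {
        std::size_t best = 0;
        float best_dist = std::numeric_limits<float>::max();
        for (std::size_t i = 0; i < precomputed.size(); ++i) {
            float d = pose_distance(precomputed[i], current);
            if (d < best_dist) { best_dist = d; best = i; }
        }
        return precomputed[best];
    }
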
fire commented 4 years ago

Thank you for your detailed reply.

If you could implement this in this library, I could reference how it is implemented. The shader code would be useful too.

fire commented 4 years ago

I've done some cleanup of the repository, and removed the binary library from the repository.

There are also some updates to the readme.

https://github.com/fire/Everything-WIll-Be-IK

EGjoni commented 4 years ago

I've just updated the library with a massively improved algorithm. As far as I can tell, it sort of "just works" and probably won't require any of the preprocessing suggestions above (check the video in the readme). I've also removed all of the old solvers in favor of this much better solver.

If you're still porting this, let me know if you need any help.

fire commented 4 years ago

I am still porting, but I was distracted by other tasks. I'll have to review the code.

fire commented 4 years ago

Is ConstraintExample_Kusudama.pde compatible with the new api?

fire commented 4 years ago

You mentioned you could provide help with the cone based constraint shader. I'm having trouble understanding that approach.

fire commented 4 years ago

My goal is to create a sample where three or more sensors affect a skeleton I've defined in a C++ engine.

This requires visualization and editing of constraints.

EGjoni commented 4 years ago

You mentioned you could provide help with the cone based constraint shader. I'm having trouble understanding that approach.

I'm away from my dev machine at the moment, but I'll be home tonight and will upload the shader then.

My goal is to create a sample where three or more sensors affect a skeleton I've defined in a C++ engine.

This requires visualization and editing of constraints.

In that case, I'll also update the Processing extension of the library available at https://github.com/EGjoni/Everything-Will-Be-IK-Processing, the latest version of which will include a visualizer for Kusudamas (and a number of other useful examples for generally getting the most out of the library). If you're willing to put in a little extra work to port the JSON based armature Saver and Loader, it will likely help you quite a bit in ensuring the armature you create behaves the same way in your C++ port as it does in Java.

Anyway, if you'd like to have something to chew on in the meantime, the basic idea behind Kusudamas is this:

https://i.imgur.com/ERuipqk.png

Except the regions are on a sphere (so the straight lines end up being curves).

In effect, this means you can add a sequence of LimitCones, and the kusudama will treat them as specifying a "path" of allowable space on the sphere that gracefully encompasses the region between one cone and the next. It also means that if you specify just one cone, a kusudama behaves identically to traditional cone-constrained ball-and-socket joints.

Is ConstraintExample_Kusudama.pde compatible with the new api?

I'm not 100% sure. But on the topic of compatibility, I've made a number of further optimizations (>400% speed gains) and improvements / feature additions since the last pull to the master branch, so if you're starting the port, please use the "modularization" branch to make sure you benefit from the latest changes.

fire commented 4 years ago

What is the difference between double and single precision code?

Is it more precise? Faster?

fire commented 4 years ago

Thank you for your help.

The processing example code makes it easier for me to understand the implementation.

EGjoni commented 4 years ago

Hey, sorry. I'm kind of backlogged with work until Friday. So you'll have to wait until then for the more complete Processing examples (there'll be some very useful stuff for your use case demoed in there that you'll want to look at when I upload it).

What is the difference between double and single precision code?

Is it more precise? Faster?

The double precision version is a little more precise and in Java also a little faster due to how Java's trigonometric operations are handled (specifically, Java's trig functions are on doubles, and so the trigonometric operations for the single precision format have the same cost as the double precision format + the cost of casting from double to float).

The reason for the single precision library is pragmatic. Specifically:

  1. If a developer wants to integrate the library into some existing Java framework for games development or graphics applications which use single-precision based transformations, then they can do so more efficiently / conveniently using the single-precision version of the library and
  2. If a developer wants to port the code outside of Java to a framework that relies on single precision transformations, they don't have to worry about whether any differences in the results / stability of their port are due to precision differences, or implementation errors. If a single precision port is unstable, they have a single precision Java implementation available to check the output against.

Sorry again for the delay. Here is the shader code in the meantime if it helps.

#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;

//Model space normal direction of the current fragment
//since we're on a sphere, this is literally just the fragment's position in 
//modelspace
varying vec3 vertNormal;

//This shader can display up to 10 cones (represented by 30 4d vectors, laid out 
// as repeating [limitCone, tangentCone, tangentCone] triplets).
// The alpha channel represents radius, the rgb channels represent the xyz coordinates 
// of the cone direction vector in model space
uniform vec4 coneSequence[30];
uniform int coneCount; 

//Make this "true" for sceendoor transparency (randomly discarding fragments)
//so that you can blur the result in another pass. Otherwise make it  
//false for a solid shell.  
uniform bool multiPass;

//Following three varyings are 
//Only used for fake lighting. 
//Not conceptually relevant
varying vec3 vertWorldNormal;
varying vec3 vertLightDir;
varying vec4 vertWorldPos;

///NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING
float hash( uint n ) { // from https://www.shadertoy.com/view/llGSzw  Base: Hugo Elias. ToFloat: http://iquilezles.org/www/articles/sfrand/sfrand.htm
    n = (n << 13U) ^ n;
    n = n * (n * n * 15731U + 789221U) + 1376312589U;
    return uintBitsToFloat( (n>>9U) | 0x3f800000U ) - 1.;
}

float noise(vec2 U) {
    return hash(uint(U.x+5000.0*U.y));
}

bool randBit(vec2 U) {
    float dist2 = 1.0;
    return 0.5 < (noise(U) * 4. -(noise(U+vec2(dist2,0.))+noise(U+vec2(0.,dist2))+noise(U-vec2(0.,dist2))+noise(U-vec2(dist2,0.))) + 0.5);
}
///END OF NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING.

//Determine if the fragment is in the path between two limitcones. 
//this should only be relied on after verifying that the fragment
//isn't in the two limitcones it's checking between. 
bool isInInterConePath(in vec3 normalDir, in vec4 tangent1, in vec4 cone1, in vec4 tangent2, in vec4 cone2) {           
    vec3 c1xc2 = cross(cone1.xyz, cone2.xyz);       
    float c1c2dir = dot(normalDir, c1xc2);

    if(c1c2dir < 0.0) { 
        vec3 c1xt1 = cross(cone1.xyz, tangent1.xyz); 
        vec3 t1xc2 = cross(tangent1.xyz, cone2.xyz);    
        float c1t1dir = dot(normalDir, c1xt1);
        float t1c2dir = dot(normalDir, t1xc2);

        return (c1t1dir > 0.0 && t1c2dir > 0.0); 

    }else {
        vec3 t2xc1 = cross(tangent2.xyz, cone1.xyz);    
        vec3 c2xt2 = cross(cone2.xyz, tangent2.xyz);    
        float t2c1dir = dot(normalDir, t2xc1);
        float c2t2dir = dot(normalDir, c2xt2);

        return (c2t2dir > 0.0 && t2c1dir > 0.0);
    }   
    return false;
}

//determines the current draw condition based on the desired draw condition in the setTo argument
// -3 = disallowed entirely; 
// -2 = disallowed and on tangentCone boundary
// -1 = disallowed and on controlCone boundary
// 0 =  allowed and empty; 
// 1 =  allowed and on controlCone boundary
// 2  = allowed and on tangentCone boundary
int getAllowabilityCondition(in int currentCondition, in int setTo) {
    if((currentCondition == -1 || currentCondition == -2)
        && setTo >= 0) {
        return -currentCondition;
    } else if(currentCondition == 0 && (setTo == -1 || setTo == -2)) {
        return setTo * -2;
    }   
    return max(currentCondition, setTo);
}

//returns  1 if normalDir is more than (cone.a + boundaryWidth/2) radians from cone.rgb (outside the cone)
//returns  0 if normalDir is within the boundary band straddling the cone's edge
//returns -1 if normalDir is less than (cone.a - boundaryWidth/2) radians from cone.rgb (inside the cone)
int isInCone(in vec3 normalDir, in vec4 cone, in float boundaryWidth) {
    float arcDistToCone = acos(dot(normalDir, cone.rgb));
    if(arcDistToCone > (cone.a+(boundaryWidth/2.))) {
        return 1; 
    }
    if(arcDistToCone < cone.a-(boundaryWidth/2.)) {
        return -1;
    }
    return 0;
} 

//returns a color corresponding to the allowability of this region, or otherwise the boundaries corresponding 
//to various cones and tangentCone 
vec4 colorAllowed(in vec3 normalDir,  in int coneCount, in float boundaryWidth) {
    normalDir = normalize(normalDir);
    vec4 result = vertColor; //base color; boundary highlights and transparency are applied to this
    int currentCondition = -3;

    if(coneCount == 1) {
        vec4 cone = coneSequence[0];
        int inCone = isInCone(normalDir, cone, boundaryWidth);
        inCone = inCone == 0 ? -1 : inCone < 0 ? 0 : -3;
        currentCondition = getAllowabilityCondition(currentCondition, inCone);
    } else {
        for(int i=0; i<coneCount-1; i++) {

            int idx = i*3; 
            vec4 cone1 = coneSequence[idx];
            vec4 tangent1 = coneSequence[idx+1];            
            vec4 tangent2 = coneSequence[idx+2];            
            vec4 cone2 = coneSequence[idx+3];

            int inCone1 = isInCone(normalDir, cone1, boundaryWidth);

            inCone1 = inCone1 == 0 ? -1 : inCone1 < 0 ? 0 : -3;
            currentCondition = getAllowabilityCondition(currentCondition, inCone1);

            int inCone2 = isInCone(normalDir, cone2, boundaryWidth);
            inCone2 =  inCone2 == 0 ? -1 : inCone2  < 0 ? 0 : -3;
            currentCondition = getAllowabilityCondition(currentCondition, inCone2);

            int inTan1 = isInCone(normalDir, tangent1, boundaryWidth); 
            int inTan2 = isInCone(normalDir, tangent2, boundaryWidth);

            if(inTan1 < 1 || inTan2 < 1) {          
                inTan1 =  inTan1 == 0 ? -2 : -3;
                currentCondition = getAllowabilityCondition(currentCondition, inTan1);
                inTan2 =  inTan2 == 0 ? -2 : -3;
                currentCondition = getAllowabilityCondition(currentCondition, inTan2);
            } else {                 
                bool inIntercone = isInInterConePath(normalDir, tangent1, cone1, tangent2, cone2);
                int interconeCondition = inIntercone ? 0 : -3; 
                currentCondition = getAllowabilityCondition(currentCondition, interconeCondition);                  
            }
        }
    }   

    if(multiPass && (currentCondition == -3 || currentCondition > 0)) {

        /////////
        //CODE FOR FANCY BLURRED TRANSPARENCY. 
        //NOT OTHERWISE CONCEPTUALLY RELEVANT TO 
        //TO VISUALIZATION
        ////////

        vec3 randDir = vec3(normalDir.x  * noise(normalDir.xy)/50.0,  normalDir.y  * noise(normalDir.yz)/50.0, normalDir.z  * noise(normalDir.zx)/50.0);
        randDir = normalDir;
        float zAdd = abs(vertWorldPos.z);
        float lon = atan(randDir.x/randDir.z) + 3.14159265/2.0;
        float lat = atan(randDir.y/randDir.x) + 3.14159265/2.0;

        bool latDraw = randBit(vec2(lat, lon));//mod(lat, 0.005) < 0.00499;
        bool lonDraw = randBit(vec2(lon, lat));//mod(lon, 0.005) < 0.00499;

        if(randBit(vec2(lon, lat))) {       
            result = vec4(0.0,0.0,0.0,0.0); 
        }
        ////////
        //END CODE FOR FANCY BLURRED TRANSPARENCY
        ///////
    } else if (currentCondition != 0) {

        float onTanBoundary = abs(currentCondition) == 2 ? 0.3 : 0.0; 
        float onConeBoundary = abs(currentCondition) == 1 ? 0.3 : 0.0;  

        //return distCol;
        result += vec4(0.0, onConeBoundary, onTanBoundary, 1.0);
    } else {
        discard;
    }
    return result;

}

void main() {

  vec3 normalDir = normalize(vertNormal); // the vertex normal in Model Space.
  float lightScalar = dot(vertLightDir, vec3(0.5,-1.,0.5)); 
  lightScalar *= lightScalar*lightScalar;
  vec4 colorAllowed = colorAllowed(normalDir, coneCount, 0.02);  

  if(colorAllowed.a == 0.0)
    discard;

  colorAllowed += (colorAllowed + fwidth(colorAllowed)); 
  colorAllowed /= 2.0;
  vec3 lightCol = vec3(1.0,0.8,0.0);
  float gain = vertWorldNormal.z < 0.0 ? -0.3 : 0.5;
  colorAllowed.rgb = (colorAllowed.rgb + lightCol*(lightScalar + gain)) / 2.;
  vec4 specCol = vec4(1.0, 1.0, 0.6, colorAllowed.a);  
  colorAllowed = colorAllowed.g > 0.8 ? colorAllowed+specCol : colorAllowed;     

  gl_FragColor = colorAllowed;
}
EGjoni commented 4 years ago

Additionally, do note that if you're making a single precision port, you should reference the single precision implementation, specifically with regard to the QCP.java class, as the single precision version of QCP includes some modifications for numerical stability which the double precision version can and does afford to skip.

fire commented 4 years ago

My plan is to make a single precision port that has an alias for using double precision. Thanks for the insight.
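Something like the following sketch (illustrative, with an assumed EWBIK_DOUBLE_PRECISION define; Godot itself uses a similar real_t typedef internally):

    // Build-time precision switch: everything in the port uses real_t, so one
    // define swaps the whole solver between single and double precision.
    #ifdef EWBIK_DOUBLE_PRECISION
    using real_t = double;
    #else
    using real_t = float;
    #endif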

EGjoni commented 4 years ago

Alright, I just pushed the latest code (with performance and stability improvements) to the modularization branch in this repository, and updated the rc branch of the processing extension repository with a bunch of new commented examples / demos.

In particular, you'll want to take a look at FullExample, which programmatically specifies a humanoid armature, and the Humanoid_Holding_Item demo, which loads an armature from JSON and demos simultaneous head, pelvis, and wrist tracking.

fire commented 4 years ago

I'm away this weekend, but I'm starting to get back into this code.

For Godot Engine, I have glTF2 scene export. In theory, anything that is in a scene can be exported as possibly animated geometry.

EGjoni commented 4 years ago

Interesting. I was unaware of glTF2 as a thing that exists.

It should be moderately straightforward to convert from EWBIK's save/load format to glTF2. The formats are already extremely similar, with the exceptions that EWBIK's format allows for skew/shear, that EWBIK's quaternions are saved in [w,x,y,z] order (as opposed to glTF2's [x,y,z,w]), and that EWBIK's format references nodes by identity hash instead of by array index.
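(The quaternion reorder at least is mechanical; an illustrative snippet, not from either codebase:)

    #include <array>

    // EWBIK saves quaternions as [w, x, y, z]; glTF2 stores [x, y, z, w].
    std::array<float, 4> ewbik_to_gltf(const std::array<float, 4>& wxyz) {
        return { wxyz[1], wxyz[2], wxyz[3], wxyz[0] };
    }

    std::array<float, 4> gltf_to_ewbik(const std::array<float, 4>& xyzw) {
        return { xyzw[3], xyzw[0], xyzw[1], xyzw[2] };
    }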

But I'm not sure if it would be more straightforward than just porting the save/load code already provided.

fire commented 4 years ago

The idea is that this code is ported to C++ and included in Godot.

The animations are captured and constrained to the sensor positions.

The animations can be exported again from the software as something Blender can read.

My plan is to port the code using the internal apis.

Exporting the glTF2 format is notable, though, because it can be used to export scenes from your projects.

fire commented 4 years ago

I'm working on the API. The IK system isn't functional yet, but I'm trying to make the interface usable.

image

EGjoni commented 4 years ago

Tantalizing!

Keep me updated on any interesting progress you make or approaches you take or snags you hit, if you could.

It would be immensely helpful for figuring out the least painful ways to structure future versions of the library.

EGjoni commented 4 years ago

(Also, the algorithm isn't technically CCD. Something like "Cyclic Mean Deviation Descent" [CMDD] might be more accurate, give or take a few letters.)

fire commented 4 years ago

I renamed the class "Cyclic Mean Deviation Descent" [CMDD].

Also, the constraints are on the entire skeleton rather than a bone chain, right?

fire commented 4 years ago

It should be possible in theory to use this for skeletal animation re-targeting too.

[Make the scale relatively match]

[Set both skeletons to a standard pose, let's say an "A" pose]

[Reset the original skeleton and animations to be based on the "A" pose]

[Map original skeleton bone to the new skeleton bone]

For each key-frame in the original skeleton, set the original bone as a target and re-target every key-framed bone to the new kusudama-constrained rig.

Thoughts?

fire commented 4 years ago

I'm having trouble understanding what these are:

  sb.simLocalAxes.updateGlobal();
  sb.forBone.setAxesToSnapped(sb.simLocalAxes, sb.simConstraintAxes, boneDamp);
  sb.simLocalAxes.translateByGlobal(translateBy);
  sb.simConstraintAxes.translateByGlobal(translateBy);

What are simLocalAxes and simConstraintAxes? What space are they in? Global or the bone relative?

What is updateTargetHeadings() or updateTipHeadings()?

You also swap between getGlobalMBasis and the local Basis.

In theory a basis can be both scale and rotation, but I think the code only uses it for rotation.

Is GlobalMBasis the concatenation of every basis to the root?

fire commented 4 years ago

How is a pin different from a bone tip or a bone target (aka multi effector target)? It was a bit unclear.

EGjoni commented 4 years ago

Also, the constraints are on the entire skeleton rather than a bone chain, right?

If you mean joint constraints (Kusudamas), they are per bone. If you mean the targets or "pins", those are evaluated over the entire section of the skeleton which they bound. Meaning, if you pin a humanoid skeleton at the pelvis, left wrist, and right wrist; then moving the pin/target of the left wrist around enough would likely result in bones in both the spine and the right arm rotating in order to keep satisfying the requirements of all three targets.

For each key-frame in the original skeleton, set the original bone as a target and re-target every key-framed bone to the new kusudama-constrained rig.

Thoughts?

For clarity let's call the original animated skeleton the "source rig", and the target animation skeleton the "destination rig".

In theory you don't even need to bother with the A-Pose part. Meaning, if you set the Source Rig to have roughly the same scale as the Destination Rig, then it doesn't matter if the two share the same starting pose, nor does it really even matter if there is a one-to-one mapping in the number of bones between the two rigs. All you would have to do (in theory, but I haven't actually tried it) is decide which joints on the Source Rig best correspond to which joints on the Destination Rig. Then, simply enable targets on the bones of interest in the Destination Rig, and have those targets follow the appropriate joints of interest on the Source Rig.

I imagine you would get worse results by trying to match every single bone between the two rigs than by trying to match just the bones of interest.
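(As a sketch of that mapping, with invented types; the solver calls at the end are hypothetical stand-ins for whatever the port exposes:)

    #include <map>
    #include <string>
    #include <vector>

    // Invented minimal transform type; a real port would use the engine's own.
    struct Transform3D { float basis[3][3]; float origin[3]; };

    // World-space joint transforms for one sampled animation frame.
    using FramePose = std::map<std::string, Transform3D>;

    // Which Source Rig joint drives which Destination Rig target. Only the
    // joints of interest need entries; a one-to-one bone match between the
    // two rigs is not required.
    struct RetargetPair {
        std::string source_joint;
        std::string dest_effector;
    };

    // Per frame: copy each source joint's world transform onto the matching
    // destination target, then run one constrained solve. The Kusudamas on
    // the destination rig keep the result inside its own joint limits.
    FramePose build_targets(const FramePose &source_frame,
                            const std::vector<RetargetPair> &pairs) {
        FramePose targets;
        for (const RetargetPair &p : pairs) {
            auto it = source_frame.find(p.source_joint);
            if (it != source_frame.end())
                targets[p.dest_effector] = it->second;
        }
        // destination_rig.set_targets(targets); destination_rig.solve();
        return targets;
    }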

Is GlobalMBasis the concatenation of every basis to the root?

Yes. GlobalMBasis is basically a cache of the composed transformation of a base with all of its ancestor bases up to and including the transformation at the armature's root. GlobalMBasis is used to avoid recalculating a basis from scratch when none of its parent bases have changed. "updateGlobal()" basically updates this cached version. There are probably a few places I call updateGlobal() where it doesn't need to be explicitly called (since the node-hierarchy system is already autocaching results). The cost of calling it when the result is already cached is negligible, but I should clean those up at some point.
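(The pattern being described is the usual dirty-flag transform cache; a toy sketch, translation-only to keep it short, and not the library's actual classes:)

    #include <vector>

    // Toy transform: translation only, for brevity. The real thing composes
    // rotation and translation (and EWBIK's format allows skew/shear too).
    struct Basis { float x = 0, y = 0, z = 0; };

    Basis compose(const Basis &parent, const Basis &local) {
        return { parent.x + local.x, parent.y + local.y, parent.z + local.z };
    }

    struct Node {
        Basis local;
        Basis cached_global;   // the GlobalMBasis analogue
        bool dirty = true;
        Node *parent = nullptr;
        std::vector<Node *> children;

        // Invalidate this node and all descendants when `local` changes.
        void mark_dirty() {
            dirty = true;
            for (Node *c : children) c->mark_dirty();
        }

        // The updateGlobal() analogue: recomputes only when something changed,
        // so calling it on an already-valid cache is nearly free.
        const Basis &global() {
            if (dirty) {
                cached_global = parent ? compose(parent->global(), local) : local;
                dirty = false;
            }
            return cached_global;
        }
    };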

What are simLocalAxes and simConstraintAxes? What space are they in? Global or the bone relative?

The "LocalAxes" part in simLocalAxes is referring only to fact that they are derived from AbstractBone's "localAxes" object. The "localAxes" of a bone are named that due to an artifact of early implementations, which lacked a dedicated scenegraph. Now that you bring it up, I can see how that would be horribly confusing.
To clarify: The "sim" part of both "simLocalAxes" and "simConstraintAxes" denotes that they are scratchpad Axes derived from but operated upon separately from the Axes on the actual armature. In essence, the solver maintains something like a doppleganger of your Armature. Each time you call the solver, or update the topology of the armature, the Axes on that doppleganger are updated to have all of the same transformations and parent-child relationships as the Armature. This is so that the solver can run for multiple iterations, and even compute otherwise illegal transformations, without modifying your actual armature until its finished running and ready to show you its presentable solution.

You also swap between getGlobalMBasis and the local Basis.

Yes, some of the calculations are more cleanly done in global space, others are more cleanly done in local space.

In theory a basis can be both scale and rotation, but I think the code only uses it for rotation.

Yeah. And translation. Why do you mention it?

How is a pin different from a bone tip or a bone target (aka multi effector target)? It was a bit unclear.

A pin and a target are the same thing. A bone-tip is the thing that reaches for a target, aka an end-effector. All of these terms are (again, now that you mention it) also horribly confusing artifacts of previous versions.

In future implementations the following conventions will be adopted:

"effector": will refer to a bone which the solver tries to maintain in a certain position/orientation.

"target": will refer to a position/orientation which an effector tries to match.

"pin": will be used only to refer to an operation one can do on a bone, meaning "create a target for this bone at the location/orientation where the bone currently is." Functions like "enablePin()" will be renamed to "enableTarget()".

EGjoni commented 4 years ago

What is updateTargetHeadings() or updateTipHeadings()?

updateTipHeadings() updates vectors that track the headings of all effectors which are descendants of the bone currently being evaluated by the solver. updateTargetHeadings() updates vectors that track the headings of each of those effectors' targets.
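(Loosely, with invented names, that pairing looks like the sketch below; the library then finds the rotation that best maps one set of headings onto the other, via QCP superposition:)

    #include <vector>

    struct Vec3 { float x, y, z; };

    Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

    // One descendant effector of the bone currently being solved.
    struct Effector {
        Vec3 tip_position;    // where the effector currently is
        Vec3 target_position; // where its target wants it to be
    };

    // Analogue of updateTipHeadings()/updateTargetHeadings(): headings are
    // directions from the bone's origin toward each tip and each target. The
    // solver rotates the bone to best align the first set with the second.
    void update_headings(const std::vector<Effector> &effectors, Vec3 bone_origin,
                         std::vector<Vec3> &tip_headings,
                         std::vector<Vec3> &target_headings) {
        tip_headings.clear();
        target_headings.clear();
        for (const Effector &e : effectors) {
            tip_headings.push_back(sub(e.tip_position, bone_origin));
            target_headings.push_back(sub(e.target_position, bone_origin));
        }
    }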

fire commented 4 years ago

In Godot Engine, Basis is a 3x3 matrix representing a rotation and a scale. A Transform is a rotation, scale, and origin.

I believe you only use LimitingAxes as a rotation.

In the current prototype, I have split the global transform from the local transform. Do you think there are any problems with that? Nothing is functional yet, so I can't test.

fire commented 4 years ago

I've used these concepts:

There's a Godot Engine bone which is replicated in a Chain (similar to a skeleton) and a ChainItem (similar to a bone).

I want to make a ChainItem also an IKTarget, depending on whether it is pinned.

Thoughts?

There is also a

    struct EndEffector {
        BoneId effector_bone;
        Transform goal_transform;
    };
EGjoni commented 4 years ago

In the current prototype, I have split the global transform from the local transform. Do you think there are any problems with that? Nothing is functional yet, so I can't test.

Depends on what you mean by "split," and where.

I believe you only use LimitingAxes as a rotation.

IIRC, yes.

I want to make a ChainItem also an IKTarget, depending on whether it is pinned.

If I'm understanding correctly, you want to make Bone A, be the target of Bone B?

If so, then the correct way to do that would be to define Bone B's Target Transform as having Bone A's Axes as its parent (and as having no rotation or translation in the local space of that parent). That way, Bone B's Target always ends up having the same position and orientation as Bone A.
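(In structural terms, something like this hypothetical sketch, with just enough of a node type to show the parenting trick:)

    // Hypothetical minimal transform node.
    struct Axes {
        Axes *parent = nullptr;
        // local rotation/translation relative to `parent` (members omitted);
        // identity means "exactly where the parent is".
    };

    // Parent Bone B's target axes to Bone A's axes with an identity local
    // transform. The target then always inherits A's global position and
    // orientation, with no per-frame copying needed.
    void pin_target_to_bone(Axes &target_axes_of_b, Axes &axes_of_bone_a) {
        target_axes_of_b.parent = &axes_of_bone_a;
        // ...and reset the target's local rotation/translation to identity.
    }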

fire commented 4 years ago

Worked on the editor.

[I think the control point, the radius, and the hidden "connected to" property are the main properties of the cone limit.]

image

Worked heavily on the internal implementation which is not done.

[Edited picture]

fire commented 4 years ago

@EGjoni Do you know what this is in degrees?

https://github.com/EGjoni/Everything-Will-Be-IK/blob/modularization/ewbIK/src/IK/floatIK/AbstractKusudama.java#L48

The range variable default?

I'm keeping the range in radian angles rather than TAU because the documentation is confusing.

EGjoni commented 4 years ago

Do you know what this is in degrees?

In degrees that would be 540, which is equivalent to 180. It can just as well be changed to just Math.PI (It's so high IIRC because I was testing that the modulo logic was working correctly).

I'm keeping the range in radian angles rather than TAU because the documentation is confusing.

I'm not sure I understand what you mean by "radian angles rather than TAU". TAU is already in radian angles. TAU = 2*PI = a full 360 degree turn.

because the documentation is confusing.

That much is absolutely true. But do be careful about taking liberties with any portion of the implementation upon which the "snapToTwistLimits()" function relies. It's a complicated function that assumes very particular things about the representation it's working with, so that the library overall can be agnostic to the handedness of the Axes being used.

Most game engines decide in advance that all bases should be right-handed, so you probably don't actually need this level of agnosticism and can simplify things for your purposes. But do note that if you want to simplify the minAxial and maxAxialRange implementation, you will certainly need to rewrite the logic of snapToTwistLimits(). (In fact, you can see a leftover comment in that function from when I made a quick attempt at structuring it to not be agnostic.)

If you intend to try your luck with such simplifications, I strongly suggest holding off on them until AFTER you've made a more naive one-to-one port that you can verify is working.

fire commented 4 years ago

What I did was keep the axial [limits] as a data-only struct. When updating it, I convert from degrees to radians.

When I use it, I replace every place it is used with to_tau and from_tau functions.

EGjoni commented 4 years ago

That sounds reasonable (so long as you're not converting back and forth internally every time the solver uses it).

By the way, in your screenshot, what are OverrideTipBasis and UseMagnet?

fire commented 4 years ago

They're from the API of the current Godot Engine FABRIK IK solver. I haven't touched it.

My friend mentioned that limit cones and axial limits are hard to understand.

Bullet calls axial limits sliders with rotations.

image

Bullet calls limit cones spherical constraints.

image

https://www.panda3d.org/manual/?title=Bullet_Constraints

The axial and cone limits are still hard to understand.

We talked about using the words twist vs rotation.

Any suggestions?

EGjoni commented 4 years ago

How about "twist" and "direction"?

EGjoni commented 4 years ago

(Also, I think Bullet's "Slider" constraints allow for translation along an axis in addition to rotation around it. Axial limits, which you would be referring to as "twist", do not support translation, and in general ewbIK does not support prismatic joints!)

fire commented 4 years ago

"twist" for axial limits and "direction" for cone limits makes sense.

We also discussed that changing the limit cone control point and radius is difficult via a Vector3 and a float.

They need to be controllable by a graphical representation.

We're imagining either a handle that rotates a pole extending from the center of the sphere outward through the control point, or moving the point on the sphere directly via a screen-space-to-sphere-space transform.

Thoughts?

EGjoni commented 4 years ago

What I do is show a sphere which the user can click on. Wherever the user clicks / drags, a ray is cast from the camera through the sphere. The closer of those intersections to the camera represents the direction of the LimitCone when transformed into the local reference frame of the constraint.

For letting the user specify radius, I put a little draggable handle on the perimeter of the cone, and the user can drag that outward or inward from the cone's on-screen center to make the cone wider or thinner.

That approach is easy to program and intuitive to use, but it does have the drawback of requiring the user to change their viewing angle if they want to drag the cone to the opposite side of the sphere. The most bang-for-buck convenience for that is to switch which of the two camera-ray/constraint-sphere intersection points is used, based on the number of times the user has dragged outside of the sphere while manipulating the cone's control point.
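(The picking math underneath that UI is standard ray/sphere intersection; a sketch with invented types, where want_near toggles between the two hits as described above:)

    #include <cmath>
    #include <optional>

    struct Vec3 { float x, y, z; };
    Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Intersect the mouse ray (origin o, unit direction d) with the constraint
    // sphere. want_near picks between the near and far hit, which is the
    // toggle for dragging a cone to the sphere's far side.
    std::optional<Vec3> pick_on_sphere(Vec3 o, Vec3 d, Vec3 center, float radius,
                                       bool want_near) {
        Vec3 oc = o - center;
        float b = dot(oc, d);
        float c = dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0f) return std::nullopt; // ray misses the sphere
        float sq = std::sqrt(disc);
        float t = want_near ? (-b - sq) : (-b + sq);
        if (t < 0.0f) return std::nullopt; // sphere is behind the camera
        // The hit point (minus the center) gives the new LimitCone direction;
        // it still needs transforming into the constraint's local frame.
        return o + d * t;
    }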

fire commented 4 years ago

I had a lot of trouble exporting the Processing demos as standalone executables. The shaders could not be found.

fire commented 4 years ago

Can you explain how alignToAxes works? Why do you do this?

I have an IKAxes class and three variables in a ChainItem, which is the substitute for your bones class:

    // Positions relative to the root bone
    Transform initial_transform;
    // Position from this bone to the child
    Vector3 current_pos;
    // Direction from this bone to the child
    Vector3 current_ori;

My problem is converting alignToAxes().

Chain is my substitute for your armature class. This is because Godot Engine already has a Skeleton and Bones.

fire commented 4 years ago

image

Some parts are working! Not very well, but it's a good start.

EGjoni commented 4 years ago

I had a lot of trouble exporting the Processing demos as standalone executables. The shaders could not be found.

Does the issue occur when running the demos from within the Processing IDE?

Can you explain how alignToAxes works? Why do you do this?

It used to be a convenience method, but I don't think it's used at all anymore.

I have an IKAxes class and three variables in a ChainItem, which is the substitute for your bones class.

    // Positions relative to the root bone
    Transform initial_transform;
    // Position from this bone to the child
    Vector3 current_pos;
    // Direction from this bone to the child
    Vector3 current_ori;

My problem is converting alignToAxes(). Chain is my substitute for your armature class. This is because Godot Engine already has a Skeleton and Bones.

You've lost me. Are the variables listed (initial_transform, current_pos, current_ori) part of your IKAxes class, or part of your ChainItem (bone) class?

Some parts are working! Not very well, but it's a good start.

Looks like a modern art installation!

fire commented 4 years ago

Here's the https://github.com/fire/Everything-Will-Be-IK-Processing/tree/v3-rc branch with the changes.

The key insight is that the data folder must be where the ".pde" is. I copied the armature and the shaders because it was simpler but less elegant. Can you think of a better way?

You've lost me. Are the variables listed (initial_transform, current_pos, current_ori) part of your IKAxes class, or part of your ChainItem (bone) class?

"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.

EGjoni commented 4 years ago

The key insight is that the data folder must be where the ".pde" is. I copied the armature and the shaders because it was simpler but less elegant. Can you think of a better way?

Sorry, could you be more specific? What is the goal you are trying to accomplish by copying the .pde files into the data folder? (Also, I don't see any .pdes in the data folder of the branch you linked to.) Is this to try to get the examples running in the Processing IDE, to get the examples running as self-contained executables, or to port the shaders for use with the Godot engine? I'm not clear.

"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.

If initial_transform is supposed to correspond to localAxes, then it should be defined relative to the bone's parent bone's transform, not relative to the root bone's transform.

    // Position from this bone to the child
    Vector3 current_pos;
    // Direction from this bone to the child
    Vector3 current_ori;

A bone can have multiple child bones, so what does it mean to define a bone's position or orientation relative to the position or orientation of just one of its children?
Also defining current_ori as a Vector3 would imply something like a Euler angle representation for orientation, but the library operates on quaternion rotations. So what's going on there?

fire commented 4 years ago

image

The goal is to get standalone Processing binaries exported from the Processing IDE. This is also known as self-contained executables.

EGjoni commented 4 years ago

And what's the purpose of trying to get standalone Processing binaries exported from the Processing IDE?

fire commented 4 years ago

My use case was to demonstrate your proof of concept to interested friends without them needing to learn git, the Processing IDE, and other miscellaneous things.

I wanted other people to evaluate the proof of concept. Someone who's an artist or a rigger.

I'm also typing a response for the transforms question.

fire commented 4 years ago

"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class.

If initial_transform is supposed to correspond to localAxes, then it should be defined relative to the bone's parent bone's transform, not relative to the root bone's transform.

// Position from this bone to the child
Vector3 current_pos;
// Direction from this bone to the child
Vector3 current_ori;

A bone can have multiple child bones, so what does it mean to define a bone's position or orientation relative to the position or orientation of just one of its children?

Also defining current_ori as a Vector3 would imply something like a Euler angle representation for orientation, but the library operates on quaternion rotations. So what's going on there?

That was the first question that came to mind for me too. Why does the Godot IK API split the rotation (as Euler angles) from the translation when the Transform data structure handles everything?

Godot's internal Transform is a 3x4 matrix consisting of a 3x3 Basis matrix (rotation and scale) and a Vector3 origin for translation.

A Basis is convertible to a Quat.

My thought was to remove current_ori and current_pos and use one transform. Do I want it to be global or local? Or do I want both: a transform for the global transform (relative to the root bone) and a local transform (relative to the parent bone)?