zturtleman / mm3d

Maverick Model 3D is a 3D model editor and animator for games.
https://clover.moe/mm3d
GNU General Public License v2.0

Add Projection rotation to sidebar #114

Open m-7761 opened 4 years ago

m-7761 commented 4 years ago

Projections can be rotated, but their rotation values are hidden and can't be edited from what I can see. Note that the sidebar is purely a beast of the "implui" code.

EDITED: They also have a Scale component, although there isn't any Scale information in the sidebar at this time.

m-7761 commented 4 years ago

The following methods are bogus. Their signatures are probably copy/pasted from set/getProjectionType. The rotation happens in rotateSelected. I suppose they should be implemented, and that much falls outside "implui".

bool setProjectionRotation(unsigned proj, int type);
int  getProjectionRotation(unsigned proj)const;
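
If they were implemented, the declarations would presumably follow the other rotation accessors and take a pointer to three values rather than an int type; a rough sketch (not the current API):

    // Hypothetical corrected declarations -- a sketch only, mirroring the
    // pattern of setBoneJointRotation/setPointRotation rather than anything
    // that exists in the current headers.
    bool setProjectionRotation(unsigned proj, const double *rot);
    bool getProjectionRotation(unsigned proj, double *rot)const;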
zturtleman commented 4 years ago

It shouldn't be too difficult to add. Similar to bone joint rotation (https://github.com/zturtleman/mm3d/commit/c010973b6451bcfa44370b728b24832cb407eeec). Scale would eventually be needed for bone joints as well (#35).

m-7761 commented 4 years ago

It's not so ideal, since the representation is two vectors. One is an "up" vector that isn't necessarily orthogonal to the other, so unless the representation is changed to be more editor friendly it will be quirky (setting the rotation will give you back different values)... I may change that if it looks easy enough. It'll probably take me some time to implement.

The scale is the length of the up-vector. Does #35 require nonuniform scaling? I don't know off the top of my head if the projection shapes all make sense for nonuniform scaling.
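
For what it's worth, the conversion amounts to something like the sketch below, assuming the stored representation is an "up" vector and a "seam" vector (the struct and function names here are invented): the scale is the up vector's length, and a rotation basis comes from orthonormalizing the two vectors.

    #include <cmath>

    // Sketch only: derive a uniform scale and an orthonormal basis (i.e. a
    // rotation) from a projection's "up" and "seam" vectors. Names are
    // illustrative; the real fields/classes may differ.
    struct Basis { double right[3], up[3], forward[3], scale; };

    static void normalize(double v[3])
    {
        double len = std::sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2]);
        if(len>0){ v[0]/=len; v[1]/=len; v[2]/=len; }
    }

    Basis basisFromUpSeam(const double up[3], const double seam[3])
    {
        Basis b;

        // Scale is the length of the (possibly non-unit) up vector.
        b.scale = std::sqrt(up[0]*up[0]+up[1]*up[1]+up[2]*up[2]);

        // Orthonormalize: the up vector defines one axis, the seam vector is
        // projected off it, and the third axis is their cross product. Since
        // the stored vectors need not be orthogonal, reading a rotation back
        // after setting it can yield different numbers (the quirk above).
        for(int i=0;i<3;i++){ b.up[i]=up[i]; b.forward[i]=seam[i]; }
        normalize(b.up);
        double d = b.forward[0]*b.up[0]+b.forward[1]*b.up[1]+b.forward[2]*b.up[2];
        for(int i=0;i<3;i++) b.forward[i]-=d*b.up[i];
        normalize(b.forward);
        b.right[0] = b.up[1]*b.forward[2]-b.up[2]*b.forward[1];
        b.right[1] = b.up[2]*b.forward[0]-b.up[0]*b.forward[2];
        b.right[2] = b.up[0]*b.forward[1]-b.up[1]*b.forward[0];
        return b;
    }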

m-7761 commented 4 years ago

For the record, may as well add "Name" to the list.

zturtleman commented 4 years ago

Ah, I hadn't looked into how Projection rotation was stored. Bone joints (#35) would need nonuniform scaling. It's fine if scale UI can't share between projection and bone joints.

m-7761 commented 4 years ago

Oh yeah, I just now noticed "XYZ axis scale" in issue 35. Since I'm adding to the animation features, I wonder if there are any other features that could be added at the same time. Adding scale is trivial for the skeleton system.

m-7761 commented 4 years ago

Follow-up: I think I am going to make a base class for Joint, Point, and the projection objects, just to keep things simple. That way it should be possible to do away with a lot of redundant code, and the existing code can be leveraged so the conversion work doesn't require a lot of thinking.

That means the mm3dfilter code needs to convert to a rotation on read, but on write I think it would be best to establish a new flag and write the rotation. I've made the new animation code write only the new flag's format, since I assume new versions shouldn't have to write files that work with old versions. That could be changed, but it would be more work, so it's not worth doing unless someone has a legitimate reason to require it.

Scale will be in the base class too. So instead of writing "up" and "seam" it can write rotation/scale, and I think this settles the question of whether Point should have a scale, since it will get one via the base class. At present the size of the points doesn't always make sense for a model's size. I don't know if frame animations should be able to animate point position and rotation independently of each other... they're fused right now, and adding scale would further complicate matters. It could be fudged in later by composing synthetic values for the interpolation mode.

Anyway, I prefer to write code that adds value rather than write code I know is ill-conceived in order to patch other ill-conceived code, since in the best case that code gets removed at some point and then writing it turns out to have been wasted time in the grand scheme.

m-7761 commented 4 years ago

For lack of a better idea I'm calling this new base class an Object, and I think it would be useful to later incorporate a concept of objects into the models to permit some degree of control like all modeling software provides. But conceptually the base class is just a name, position, rotation, and scale.
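
To give a rough picture, the fields the code below relies on would look something like this (inferred from the accessors, so the real class may differ):

    #include <string>

    // Sketch of the proposed base class, inferred from the accessors quoted
    // below (m_abs/m_rot/m_xyz plus the *Source pointers that select between
    // the rest values and per-frame animation data). Not the actual header.
    class Object2020
    {
    public:

        std::string m_name;

        double m_abs[3]; // position (absolute coordinates)
        double m_rot[3]; // rotation
        double m_xyz[3]; // scale

        // When an animation is active these can be repointed at the
        // animated values; otherwise they refer to the fields above.
        double *m_absSource = m_abs;
        double *m_rotSource = m_rot;
        double *m_xyzSource = m_xyz;
    };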

Off-topic: Point::m_type is an unused field that isn't zero-initialized when writing MM3D files, so it can't be read reliably without a new flag. Until then it's wasted space in the file. I'm removing get/setPointType.

m-7761 commented 4 years ago

Here is code for a change I worked on over the last day or so, just to get started. The name "Object2020" can be changed after the code is accepted, whenever we try to merge this work into the mainline Qt branch. This lays the groundwork for adding scaling to the mix, and it eliminates all but one modelundo.cc object. It was hard enough to follow everything, so I just translated the existing code without any changes. There are several holes in these APIs, mainly owing to the fact that positional vectors aren't normally set this way: the UI calls translateSelected instead, and there are no analogues to "translation" otherwise.

I had to remove setBoneJointTranslation because its semantics were wrong (relative versus absolute), but it was only used by deleteBoneJoint (which shouldn't be using it anyway, since it calls setupJoints in a loop redundantly).

I standardized on "Coords" for positional vectors and added Unanimated versions to get rid of some unclear euphemisms that just looked like synonymous APIs.

(Note, I'm not publishing this, or any, code until I finish working on the new animation features I'm developing.)

    ////Object2020////Object2020////Object2020////Object2020/////

    /*BACKGROUND
    https://github.com/zturtleman/mm3d/issues/114

    This refactor simplifies a lot of logic and removes countless
    modelundo.cc objects from the mix.

    The pass-through APIs are legacy compatibility functions. The
    various APIs have been ported as-is for the time being. There
    is little consistency between the various types of "Position"
    classes. I've removed some euphemistic method names, in favor
    of the "Unanimated" construction that seems better suited for
    conveying intent. Note, only preexisting APIs are implemented.
    */

Model::Object2020 *Model::getPositionObject(const Position &pos)
{
    void *v; switch(pos.type)
    {
    case PT_Joint: v = &m_joints; break;
    case PT_Point: v = &m_points; break;
    case PT_Projection: v = &m_projections; break;
    case _OT_Background_: v = &m_background; break;
    default: return nullptr;
    }
    auto &vec = *(std::vector<Object2020*>*)v;
    return pos.index<vec.size()?vec[pos.index]:nullptr;
}

bool Model::setPositionCoords(const Position &pos, const double *coords)
{
    Object2020 *obj = getPositionObject(pos); if(!obj) return false;
    auto undo = new MU_SetObjectXYZ(pos);
    undo->setXYZ(this,obj->m_abs,coords); 
    sendUndo(undo,true);
    memcpy(obj->m_abs,coords,sizeof(*coords)*3); return true;
}
bool Model::getPositionCoords(const Position &pos, double *coord)const
{
    if(pos.type==PT_Vertex) return getVertexCoords(pos,coord);
    auto *obj = getPositionObject(pos); if(!obj) return false;  
    memcpy(coord,obj->m_absSource,sizeof(*coord)*3); return true;
}

bool Model::setPositionRotation(const Position &pos, const double *rot)
{
    Object2020 *obj = getPositionObject(pos);
    if(!obj) return false;

    if(PT_Point==pos.type) //HACK: Compatibility fix.
    {
        if(m_animationMode==ANIMMODE_FRAME)
        return setFrameAnimPointRotation
        (m_currentAnim,m_currentFrame,pos,rot[0],rot[1],rot[2]);
    }

    auto undo = new MU_SetObjectXYZ(pos);
    undo->setXYZ(this,obj->m_rot,rot); 
    sendUndo(undo,true);
    memcpy(obj->m_rot,rot,sizeof(*rot)*3);

    if(pos.type==PT_Joint) setupJoints(); //!!

    return true;
}
bool Model::getPositionRotation(const Position &pos, double *rot)const
{
    auto *obj = getPositionObject(pos); if(!obj) return false;  
    memcpy(rot,obj->m_rotSource,sizeof(*rot)*3); return true;
}

bool Model::setPositionScale(const Position &pos, const double *scale)
{
    Object2020 *obj = getPositionObject(pos); if(!obj) return false;
    auto undo = new MU_SetObjectXYZ(pos);
    undo->setXYZ(this,obj->m_xyz,scale); 
    sendUndo(undo,true);
    memcpy(obj->m_xyz,scale,sizeof(*scale)*3); return true;
}
bool Model::getPositionScale(const Position &pos, double *scale)const
{
    auto *obj = getPositionObject(pos); if(!obj) return false;  
    memcpy(scale,obj->m_xyzSource,sizeof(*scale)*3); return true;
}

bool Model::setPositionName(const Position &pos, const char *name)
{
    auto *obj = getPositionObject(pos);
    if(obj&&name&&name[0])
    {
        auto *undo = new MU_SetObjectName();
        undo->setName(pos,name,obj->m_name.c_str());
        sendUndo(undo);

        obj->m_name = name;
        return true;
    }
    else return false;
}
const char *Model::getPositionName(const Position &pos)const
{
    auto *obj = getPositionObject(pos);
    return obj?obj->m_name.c_str():nullptr;
}

bool Model::setPointCoords(unsigned point, const double *trans)
{
    return setPositionCoords({PT_Point,point},trans);
}
bool Model::setProjectionCoords(unsigned proj, const double *coord)
{
    return setPositionCoords({PT_Projection,proj},coord);
}
bool Model::setBackgroundCenter(unsigned index, float x, float y, float z)
{
    double coord[3] = {x,y,z};
    return setPositionCoords({_OT_Background_,index},coord);
}
bool Model::getBoneJointCoords(unsigned joint, double *coord)const
{
    return getPositionCoords({PT_Joint,joint},coord);
}
bool Model::getPointCoords(unsigned point, double *coord)const
{
    return getPositionCoords({PT_Point,point},coord);
}
bool Model::getPointCoordsUnanimated(unsigned point, double *coord)const
{
    if(!coord||point>=m_points.size()) return false; //???

    memcpy(coord,m_points[point]->m_abs,sizeof(*coord)*3); return true;
}
bool Model::getProjectionCoords(unsigned proj, double *coord)const
{
    return getPositionCoords({PT_Projection,proj},coord);
}
bool Model::getBackgroundCenter(unsigned index, float &x, float &y, float &z)const
{
    double coord[3];
    if(!getPositionCoords({_OT_Background_,index},coord)) return false;
    x = (float)coord[0]; 
    y = (float)coord[1]; 
    z = (float)coord[2]; return true;
}

bool Model::getPointRotation(unsigned point, double *rot)const
{
    return getPositionRotation({PT_Point,point},rot);
}
bool Model::getPointRotationUnanimated(unsigned point, double *rot)const
{
    if(!rot||point>=m_points.size()) return false; //???

    memcpy(rot,m_points[point]->m_rot,sizeof(*rot)*3); return true;
}
bool Model::setBoneJointRotation(unsigned joint, const double *rot)
{
    return setPositionRotation({PT_Joint,joint},rot);
}
bool Model::setPointRotation(unsigned point, const double *rot)
{
    return setPositionRotation({PT_Point,point},rot);
}

bool Model::setBackgroundScale(unsigned index, float scale)
{
    double xyz[3] = {scale,scale,scale};
    return setPositionScale({_OT_Background_,index},xyz);
}
float Model::getBackgroundScale(unsigned index)const
{
    double xyz[3];
    if(getPositionScale({_OT_Background_,index},xyz)) 
    return (float)xyz[0]; return 0; //???   
}

bool Model::setBoneJointName(unsigned joint, const char *name)
{
    return setPositionName({PT_Joint,joint},name);
}
bool Model::setPointName(unsigned point, const char *name)
{
    return setPositionName({PT_Point,point},name);
}
bool Model::setProjectionName(unsigned proj, const char *name)
{
    return setPositionName({PT_Projection,proj},name);
}
bool Model::setBackgroundImage(unsigned index, const char *str)
{
    return setPositionName({_OT_Background_,index},str);
}
const char *Model::getBoneJointName(unsigned joint)const
{
    return getPositionName({PT_Joint,joint});
}
const char *Model::getPointName(unsigned point)const
{
    return getPositionName({PT_Point,point});
}
const char *Model::getProjectionName(unsigned proj)const
{
    return getPositionName({PT_Projection,proj});
}
const char *Model::getBackgroundImage(unsigned index)const
{
    return getPositionName({_OT_Background_,index});
}
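
As a usage note (illustrative only, with made-up variable names), sidebar code could then go through the unified Position API rather than projection-specific calls:

    // Illustrative only: how sidebar code might read/write a selected
    // projection's rotation via the Position-based API. "model" and
    // "projIndex" are placeholders for whatever the sidebar holds.
    Model::Position pos = { Model::PT_Projection, projIndex };

    double rot[3];
    if(model->getPositionRotation(pos,rot))
    {
        // ...populate the X/Y/Z rotation fields from rot[0..2]...
    }

    // When the user edits a value:
    model->setPositionRotation(pos,rot);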
m-7761 commented 4 years ago

I've had some success switching the projections to a rotation and scale model. It makes more sense to use rotation to set them up, based on how their tool works. The tool was using Shift to constrain to horizontal/vertical coordinates, but that doesn't really make sense since the tool basically wants to rotate, so I switched it to the usual convention of using Shift to lock to 15-degree angles.
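
The 15-degree lock is just snapping the dragged angle while Shift is held, along these lines (a sketch, not the actual tool code):

    #include <cmath>

    // Sketch: snap a dragged angle (radians) to 15-degree increments while
    // Shift is held, the usual rotate-tool convention.
    double snapAngle(double angle, bool shiftHeld)
    {
        if(!shiftHeld) return angle;

        const double step = 15*3.14159265358979323846/180;
        return std::round(angle/step)*step;
    }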

Because the view matrix's rotation placed the seam on the front, I've changed it to draw the seam on the back, and to reverse the seam when loading old-format files. That means that when a projection is set up in the Front view its rotation is 0,0,0. I went to this length just so 0,0,0 is a sensible orientation.

All that remains is to change the seam convention for the applyRotation operation. Speaking of which, I think it makes sense not to call that function automatically from the transformation routines, since it will often be applied more than once for each class of transformation. I also added an enum today to defer applying animation changes after setting the time, since that can wait until the model is drawn or sampled. I ran into this need because the mouse resolution is somehow higher than the frame rate.

(I think that same system could be used to retain normals if animating them isn't desired, but currently at least the face normals need to be updated for the BSP-tree feature to work, so that's not an option right now.)
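
The deferral described above is essentially a dirty flag: setting the time only marks the animation state stale, and the expensive apply step runs once when the model is actually drawn or sampled. A minimal sketch of the pattern, with invented names:

    // Sketch of deferring animation application (names invented).
    class AnimState
    {
    public:

        enum ChangeE { CS_Clean, CS_TimeChanged };

        void setCurrentTime(double t)
        {
            m_time = t;
            m_change = CS_TimeChanged; // defer the real work
        }

        // Called from drawing/sampling code; does the work at most once per
        // change, no matter how many mouse events moved the time in between.
        void validate()
        {
            if(m_change==CS_Clean) return;

            applyAnimation(m_time); // the expensive part
            m_change = CS_Clean;
        }

    private:

        void applyAnimation(double){ /* update joints, points, normals... */ }

        double m_time = 0;
        ChangeE m_change = CS_Clean;
    };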

m-7761 commented 4 years ago

Short of adding scaling to the sidebar, I'm finished with this task. Along the way I added a getMatrix method to the Object class itself, which makes things much more straightforward in many cases.
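
That getMatrix presumably just composes scale, rotation, and translation in that order; a standalone sketch against the fields above (conventions here are assumptions, and the real method would go through the existing Matrix class):

    #include <cmath>

    // Sketch only: build a row-major 4x4 from scale (m_xyz), Euler rotation
    // (m_rot) and position (m_abs), applying scale, then rotation (X, then Y,
    // then Z), then translation to a row vector. Axis order and handedness
    // are assumptions for illustration.
    void getObjectMatrix(const double scale[3], const double rot[3],
                         const double pos[3], double m[16])
    {
        double cx = std::cos(rot[0]), sx = std::sin(rot[0]);
        double cy = std::cos(rot[1]), sy = std::sin(rot[1]);
        double cz = std::cos(rot[2]), sz = std::sin(rot[2]);

        double r[3][3] = {
        { cy*cz,            cy*sz,            -sy   },
        { sx*sy*cz-cx*sz,   sx*sy*sz+cx*cz,   sx*cy },
        { cx*sy*cz+sx*sz,   cx*sy*sz-sx*cz,   cx*cy }};

        for(int i=0;i<3;i++)
        for(int j=0;j<3;j++) m[i*4+j] = scale[i]*r[i][j];

        m[3] = m[7] = m[11] = 0;
        m[12] = pos[0]; m[13] = pos[1]; m[14] = pos[2]; m[15] = 1;
    }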

I couldn't figure out whether dragging the mouse up or down should result in rotation (flipping the projection over) or vice versa, and I struggled to come up with an argument or find a parallel elsewhere... until I realized the tiebreaker could be whatever the current release does. In that case, dragging down (in the front/identity view) results in an upside-down projection.

I found that creating projections was difficult, since I have that bound to Shift+F12, plus F12 breaks into the program with Visual Studio, so it can't be used in release builds...

What I think would be a nice change is to remove bones and projections from the toolbar and make the Point items generalized "Object" creation/selection tools, assigned to F10/O respectively. In that system you'd have to toggle which kinds of objects you want in the parameters area, but I think it would be an improvement overall, seeing as you can think of many more classes of point-like objects that can't all fit into the UI with their own dedicated tools/hotkeys. It's a slight hindrance, but I think these are low-traffic features. Unfortunately I can't see a way to assign hotkeys to the parameters bar elements.

I think a general "object" that is basically a hierarchical transform system would be a good addition. I've given it some thought, and I think what would fit most easily is to keep all of the vertex coordinates in global space but use the objects (assign vertices to them, like triangles to groups) to track object-level transformations. That way import/export modules wouldn't have to deal with this layer of complexity unless they want to inverse-transform the vertices into their objects' coordinate systems, and likewise only object-manipulation code would have to address such things. I don't know if that's how other software does it, but I would guess that the other way around is more conventional.
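
In that scheme, an exporter that does want object-local coordinates only needs an inverse transform at write time, roughly (a sketch under the assumptions above, with an invented helper; it only handles the rigid, unscaled case):

    // Sketch: vertices are stored in global space; an exporter that wants
    // object-local coordinates inverse-transforms them on the way out. The
    // matrix layout matches the row-major sketch above; rigid transforms only.
    struct Vec3 { double x,y,z; };

    Vec3 worldToObject(const Vec3 &v, const double m[16])
    {
        // Subtract the translation, then multiply by the transposed
        // (inverse) rotation.
        double px = v.x-m[12], py = v.y-m[13], pz = v.z-m[14];
        return{ px*m[0]+py*m[1]+pz*m[2],
                px*m[4]+py*m[5]+pz*m[6],
                px*m[8]+py*m[9]+pz*m[10] };
    }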

A simplistic view of such objects would be named vertex groups with transform parameters like joints. It would be nice to be able to use them to track layers too. (I think that'd be ambiguous if triangles share vertices assigned to different objects, but that case could simply come down to whatever the last layer operation was.)