uwgraphics / relaxed_ik

RelaxedIK Solver
MIT License

"lookat" objective function #24

Closed nancyhong123 closed 3 years ago

nancyhong123 commented 3 years ago

Hi Daniel!

I am an undergraduate student working to complete a research paper. I would like to apply relaxed_ik to a camera manipulator robot, but I am having trouble formulating the objective function to point the camera at the target. I've been following your work in "Remote Telemanipulation with Adapting Viewpoints in Visually Complex Environments" and I am trying to recreate the "lookat" objective function from the paper. I have posted my attempt below.

The position goal I input to the solver is the position of the visual target, point_to_line_dist is a function that returns the perpendicular distance from the visual target to the viewpoint vector, and I have set the weights to give priority to this objective: weight_priors=(50.0, 5.0, 0.3, 0.3, 0.3, 1.0, 2.0). Currently this function does not work at all; the solution seems to configure a viewing direction that is almost opposite to the desired one. Do you have any pointers on what I may be doing wrong, or on how to implement new objective functions in an efficient way?

class Position_MultiEE_Obj(Objective):
    def __init__(self, *args): pass
    def isVelObj(self): return False
    def name(self): return 'Position_MultiEE'

    def __call__(self, x, vars):
        if vars.c_boost:
            x_val = objectives_ext.position_multiEE_obj(vars.frames, vars.goal_positions, [1.0, 1.0])
        else:
            x_val_sum = 0.0

            for i, frame in enumerate(vars.frames):
                positions = frame[0]
                eePos = positions[-1]
                goal_pos = vars.goal_positions[i]

                # Local axis assumed to be the camera's "forward" direction
                z_axis = np.array([0, 0, 1])

                # Embed the end-effector rotation matrix in a homogeneous
                # matrix so it can be converted to a quaternion
                eeMat = frame[1][-1]
                new_mat = np.eye(4)
                new_mat[0:3, 0:3] = eeMat
                ee_quat = Tf.quaternion_from_matrix(new_mat)

                # Rotate the local axis into the world frame to get the
                # viewing direction, then extend it into a viewpoint ray
                ee_rot = Quaternion(ee_quat[3], ee_quat[0], ee_quat[1], ee_quat[2])
                ee_orientation_vect = ee_rot.rotate(z_axis)
                ee_orientation_vect = ee_orientation_vect / np.linalg.norm(ee_orientation_vect)
                end = 10 * np.array(ee_orientation_vect) + np.array(eePos)

                # Perpendicular distance from the visual target to the ray
                x_val_sum += Tf.point_to_line_dist(goal_pos, eePos, end)

            x_val = x_val_sum

        # Loss-function parameters
        t = 0.0
        d = 2.0
        c = 0.1
        f = 10
        g = 2

        if vars.c_boost:
            return objectives_ext.nloss(x_val, t, d, c, f, g)
        else:
            return (-math.e ** ((-(x_val - t) ** d) / (2.0 * c ** 2))) + f * (x_val - t) ** g

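For reference, point_to_line_dist returns the perpendicular distance from the target to the viewing ray. A minimal numpy version, assuming the standard cross-product formula (the actual Tf implementation may differ), looks like:

```python
import numpy as np

def point_to_line_dist(point, line_start, line_end):
    # Perpendicular distance from `point` to the infinite line through
    # `line_start` and `line_end`, via |(p - a) x (b - a)| / |b - a|.
    p = np.asarray(point, dtype=float)
    a = np.asarray(line_start, dtype=float)
    b = np.asarray(line_end, dtype=float)
    return np.linalg.norm(np.cross(p - a, b - a)) / np.linalg.norm(b - a)
```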
djrakita commented 3 years ago

Hi Nancy, This looks pretty close to correct! Based on your description and a first look through the code, it seems that the issue is somehow related to how you’re calculating the camera’s “forward” vector (the look-at direction). It seems like you’re trying to make this the end-effector frame’s local Z axis based on the line

z_axis = np.array([0, 0, 1])

and on how you later rotate it by the end-effector quaternion. If the camera seems to be pulled into looking in the opposite direction of the target, maybe try switching this to the -Z axis:

z_axis = -np.array([0, 0, 1])

Or, more generally, I would recommend just debugging and making sure that this forward vector is exactly what you expect it to be. If you try various versions of this and verify that the forward direction is correct and the optimization still isn’t working, feel free to reach out again.
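One quick way to check this, numpy-only and independent of the quaternion conversion: the world-frame direction of a local axis is just the rotation matrix applied to that axis, so you can compare candidate axes directly against the direction you expect. (forward_vector here is a hypothetical helper for debugging, not part of relaxed_ik; the example matrix is an arbitrary 90-degree rotation about world X.)

```python
import numpy as np

def forward_vector(ee_mat, local_axis):
    # R @ axis gives the world-frame direction of a local axis;
    # equivalent to rotating the axis by the frame's quaternion.
    v = np.asarray(ee_mat, dtype=float) @ np.asarray(local_axis, dtype=float)
    return v / np.linalg.norm(v)

# Example frame: rotated 90 degrees about the world X axis
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

print(forward_vector(R, [0, 0, 1]))   # local +Z
print(forward_vector(R, [0, 0, -1]))  # local -Z
print(forward_vector(R, [0, 1, 0]))   # local +Y
```

Printing the candidates like this for a known end-effector pose should make it obvious which local axis is actually the camera's optical axis.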

nancyhong123 commented 3 years ago

Hi Daniel, thank you so much for your helpful advice! I was able to fix this by experimenting with the z_axis vector and setting it to [0, 1, 0]. I had assumed that the end-effector orientation quaternion was expressed relative to the robot base frame I had set in the "start_here.py" file, but that does not seem to be the case.