robotlearn / pyrobolearn

PyRoboLearn: a Python framework for Robot Learning

Force control issue #32

Closed: TFLQW closed this issue 4 years ago

TFLQW commented 4 years ago

Hi! Thanks for your contribution. Recently I have been trying to write some force control code for a UR robot in your simulation framework, but I found that the measured force is too high to track the reference force. Could you attach some force control code using a robot arm such as the KUKA, or other robots, for reference?

lrozo commented 4 years ago

Hi!

It is great to know what you are working on. In order to address your issue, it would be good if you could share the code you are working on; then we may be able to help you with some hints!

TFLQW commented 4 years ago

@lrozo Sorry, I cannot attach the whole code, because I have added some other tool models to the robot, so the framework is complex. I will just attach the main part of the code. In this part I use a bio-DMP to get the reference trajectory position and orientation, but in the force control loop I only apply the correction to the position part, excluding the torque control.

```python
for t in count():
    # cubic interpolation of the DMP reference trajectory
    # (y[1], y[2], y[3] hold the position; y[4], y[5], y[6] the orientation angles)
    oo = np.linspace(0, np.shape(y)[1] - 1, np.shape(y)[1])
    f0 = interpolate.interp1d(oo, y[1], kind='cubic')
    f1 = interpolate.interp1d(oo, y[2], kind='cubic')
    f2 = interpolate.interp1d(oo, y[3], kind='cubic')
    f3 = interpolate.interp1d(oo, y[4], kind='cubic')
    f4 = interpolate.interp1d(oo, y[5], kind='cubic')
    f5 = interpolate.interp1d(oo, y[6], kind='cubic')

    # move sphere
    # sphere.position = np.array([0.45, r * np.cos(w*t + np.pi/2), (1.3-r) + r * np.sin(w*t + np.pi/2)])
    # plot.update()
    # plot_.update()
    # plot__.update()

    # calculate the target position from hole to world
    # target_p = np.array([0.0, 0.0, 0.3])
    if FT_sensor_.sense() is not None:
        FT_Current = FT_sensor_.sense()
        FT_Current = np.dot(FT_sensor_.sense(), np.array([[0, 0, -1, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0],
                                                          [0, 0, 0, 0, 0, -1], [0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0]]))
        FT_ = np.vstack((FT_, FT_Current))

        FT_Error = FT_Desired - FT_Current

        FT_Error_correct = np.dot(FT_Error, np.linalg.inv(np.array([[0, 0, -1, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0],
                                                          [0, 0, 0, 0, 0, -1], [0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0]])))
        FT_Error_Tran = np.dot(np.linalg.inv(np.mat(Tran_hole2world)),
                           robot_.get_homogeneous_transform(robot_.get_link_world_positions(6),
                                                            robot_.get_link_world_orientations(6)))[0:3, 0:3]
        XFT_Error_ = np.dot(FT_Error_Tran, FT_Error_correct[0:3])
        XFT_Error = XFT_Error_.A

        pass
    # target_p = target[o, 0:3]/1000
    # target_o = target[o, 3::]

    angle_dmp = np.array([f3(w), f4(w), f5(w)])
    # if FT_sensor_.sense() is not None:
    #     Error_R = matrix_from_casual_axis_angle(FT_Error[3::], K_angle * np.linalg.norm(FT_Error[3::]))
    #     modify_R = np.dot(get_matrix_from_quaternion(robot_.get_link_world_orientations(link_ids=6)), Error_R)
    #     modify_R = np.dot(np.linalg.inv(np.mat(Tran_hole2world)),
    #                                             robot_.get_homogeneous_transform(robot_.get_link_world_positions(6),
    #                                                                             get_quaternion_from_matrix(modify_R)))[0:3, 0:3]
    #     modify_rpy = get_rpy_from_matrix(modify_R)
    #     modify_angle_dmp = rpy_to_x_angle(modify_rpy, np.dot(np.linalg.inv(np.mat(Tran_hole2world)),
    #                                                          np.hstack((robot_.get_link_world_positions(6),
    #                                                                     np.array([1])))).A[0, 0:3])
    #     for l in range(len(modify_angle_dmp)):
    #         modify_angle_dmp[l] = math.radians(modify_angle_dmp[l])
    #     angle_dmp = (angle_dmp + 3 * modify_angle_dmp) / 4        
    target_p = np.array([f0(w), f1(w), f2(w)])
    # aaaa = np.dot(f_stiffness_matrix, XFT_Error.reshape(XFT_Error.shape[0]))
    if FT_sensor_.sense() is not None:
        target_p = target_p + np.dot(f_stiffness_matrix, XFT_Error.T)
    orientation_dmp = angle_to_ZYX_Eular(angle_dmp)
    target_o = orientation_dmp

    w = w + 0.5
    count_num = count_num + 1
    if w >= np.shape(y_)[1] - 1:
        w = 0

    Tran_target2hole = get_homogeneous_matrix(target_p, target_o)

    Tran_target2world = np.dot(Tran_hole2world, Tran_target2hole)  # get the transformation target to the world
    TargetInWorld_p = Tran_target2world[0:3, 3]  # position
    TargetInWorld_o = get_quaternion_from_matrix(Tran_target2world[0:3, 0:3])  # orientation [1*4]

    # q_d_ = robot.calculate_inverse_kinematics(link_id, position=sphere.position, orientation=sphere.orientation)
    # set the target position on the sphere to visualize
    sphere_.position = robot.get_link_world_positions(link_ids=link_id)
    sphere_.orientation = robot.get_link_world_orientations(link_ids=link_id)

    sphere_2.position = TargetInWorld_p
    sphere_2.orientation = TargetInWorld_o

    # get current end-effector position and velocity in the task/operational space
    x = robot_.get_link_world_positions(link_id)
    dx = robot_.get_link_world_linear_velocities(link_id)
    o = robot_.get_link_world_orientations(link_id)
    do = robot_.get_link_world_angular_velocities(link_id)

    # Get joint positions
    q = robot_.get_joint_positions()

    # Get linear jacobian
    if robot_.has_floating_base():
        J = robot_.get_jacobian(link_id, q=q)[:, qIdx + 6]
    else:
        J = robot_.get_jacobian(link_id, q=q)[:, qIdx]

    # Pseudo-inverse: \hat{J} = J^T (JJ^T + k^2 I)^{-1}
    Jp = robot_.get_damped_least_squares_inverse(J, damping)

    # get the linear velocity
    dv = kp_x * (sphere_2.position - x) + kd_x * dx

    # get the angle velocity
    orientation_d = sphere_2.orientation
    orientation = o

    error_quaternion_ = orientation_d.dot(orientation)
    if error_quaternion_ < 0:
        orientation = - orientation

    error_o = quaternion_error(orientation_d, orientation)

    # get the angular velocity
    dw = kp_o * error_o + kd_o * do

    dq = Jp.dot(np.hstack((dv, dw)))

    # set joint positions
    q = q[qIdx] + dq * dt
    robot_.set_joint_positions(q, joint_ids=joint_ids)

    error_position = error_position + np.square(np.linalg.norm(sphere_2.position) -
                                                np.linalg.norm(robot_.get_link_world_positions(6)))
    error_orientation = error_orientation + \
                        np.square(np.linalg.norm(get_rpy_from_quaternion(sphere_2.orientation) -
                                                 get_rpy_from_quaternion(robot_.get_link_world_orientations(6))))

    if w == 0:
        print("position_error_: ", error_position / count_num)
        print("orientation error_: ", error_orientation / count_num)
        count_num = 0
        error_position = 0
        error_orientation = 0
        Task_Over_Flag = True
        break
```
TFLQW commented 4 years ago

@lrozo This is a screenshot of the peg-in-hole task, using admittance control. The desired force profile is [Fx=10, Fy=0, Fz=0], and the peg axis is along Fx. (screenshot attached, 2020-03-19)

lrozo commented 4 years ago

Hi,

It is harder to tell what is going on without the full code running. However, what I can see at first glance is that you are transforming a force-error-based command into a position target using:

target_p = target_p + np.dot(f_stiffness_matrix, XFT_Error.T)

After some transformations, you use this to compute the sphere velocity, so I am not really sure what is being implemented here. I would recommend trying one of the following:

  1. Use proportional force control along the direction in which you want to control the force, and position/impedance control in the other directions, all implemented in task space; or

  2. Compute a virtual proxy (position target) based on a mass-spring-damper system attached to the end-effector, which is subject to an external force (the one you want to track). Given that linear dynamic equation, you can solve for the desired "spring position" (or equilibrium) given the accelerations, velocities, applied forces, and gains; see the sketch below.
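Here is a minimal sketch of option 2 (every name, gain, and value below is a placeholder, not PyRoboLearn API): integrate the mass-spring-damper admittance law one explicit-Euler step per control cycle to turn the force error into a position offset, and feed the resulting proxy to the existing IK loop instead of the raw stiffness mapping.

```python
import numpy as np

# placeholder admittance gains (diagonal, task space)
M = np.diag([2.0, 2.0, 2.0])        # virtual mass [kg]
D = np.diag([50.0, 50.0, 50.0])     # virtual damping [Ns/m]
K = np.diag([300.0, 300.0, 300.0])  # virtual stiffness [N/m]

def admittance_step(x_off, dx_off, f_error, dt):
    """One explicit-Euler step of M*ddx + D*dx + K*x = f_error.

    x_off, dx_off: current proxy offset and its velocity, shape (3,)
    f_error: desired minus measured force in the task frame, shape (3,)
    Returns the updated (x_off, dx_off).
    """
    ddx = np.linalg.solve(M, f_error - np.dot(D, dx_off) - np.dot(K, x_off))
    dx_off = dx_off + ddx * dt
    x_off = x_off + dx_off * dt
    return x_off, dx_off

# inside the control loop, this would replace
#   target_p = target_p + np.dot(f_stiffness_matrix, XFT_Error.T)
# with something like:
#   x_off, dx_off = admittance_step(x_off, dx_off, XFT_Error, dt)
#   target_p = target_p + x_off
```

At steady state the offset converges to K^{-1} * f_error, so K directly sets how much position is traded per Newton of force error, while M and D shape the transient and filter the noisy force signal.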
TFLQW commented 4 years ago

@lrozo Thanks for your advice. I have tried to use a simple case to show my problem. This case uses a KUKA iiwa and consists of three steps: first, reaching the initial position; then, approaching the table; last, touching the table while maintaining Fz = 10 N and tracing a circular trajectory on the table. However, the detected Fz chatters around 500 N. When I change dt to 1/2400 the results improve somewhat, but it still does not converge to 10 N. Could you see something wrong? I have attached the whole code in a .zip, and the gif.

force_control_example.zip

TFLQW commented 4 years ago

@lrozo Maybe I have found the problem: I didn't set the model's contact stiffness and damping parameters. But when in contact with a high-stiffness model the system is unstable; could you give me some advice on how to improve the system's stability?
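For reference, since PyRoboLearn runs on PyBullet by default, this is roughly how I understand the contact parameters can be set (the body and values below are placeholders for the contacted model):

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
table_id = p.loadURDF("table/table.urdf")  # stands in for the contacted model

# contactStiffness and contactDamping must be set together in PyBullet;
# a softer contact (lower stiffness) is much easier to force-control stably.
p.changeDynamics(table_id, linkIndex=-1,
                 contactStiffness=1e4,  # N/m, placeholder: tune down if Fz chatters
                 contactDamping=1e2)    # Ns/m, placeholder
```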

lrozo commented 4 years ago

That's really good news! I am curious about your setup and simulation; it would be great if you contributed to PyRoboLearn by sharing your code as a kind of example, so other people can also make use of it. You can do so by creating a pull request, and we will be happy to review it!

Regarding stability guarantees, this is a common topic in operational-space robot control and task-space impedance control. You may want to read these papers:

  1. https://ieeexplore.ieee.org/abstract/document/1087068
  2. K. Kronander "Stability Considerations for Variable Impedance Control"
  3. https://www.cambridge.org/core/journals/robotica/article/tutorial-survey-and-comparison-of-impedance-control-on-robotic-manipulation/4C93E5D0778D23E0F9DDDA36E5E86C9E
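As a rough rule of thumb (standard discrete-time contact analysis, not specific to the papers above): a contact of stiffness k_c acting on an effective mass m oscillates at omega_n = sqrt(k_c / m), and an explicitly integrated simulation only behaves well when the timestep resolves that frequency, roughly dt << 2*pi / omega_n. That matches what you observed: shrinking dt to 1/2400 helped, and lowering the contact stiffness (or filtering the force signal and reducing the force gain) increases the timestep you can get away with.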
TFLQW commented 4 years ago

@lrozo OK! I will organize the related code soon and create a pull request. I am glad to contribute to the pyrobolearn project!