fzi-forschungszentrum-informatik / cartesian_controllers

A set of Cartesian controllers for the ROS1 and ROS2-control framework.
BSD 3-Clause "New" or "Revised" License
381 stars 113 forks

Applying cartesian_controllers to a robot that does not support ros_controllers #28

Closed graziegrazie closed 2 years ago

graziegrazie commented 3 years ago

I'm planning to apply cartesian_compliance_controller to Nextage Open, an upper-body humanoid. The robot does not support ros_controllers for hardware access, but it does support moveit_commander. In this case, which module of cartesian_compliance_controller should I change to adapt it to the robot?

As far as I checked, hardware access for getting and setting each joint position is performed via a joint handle. So I guess that if I replace the joint handle part with moveit_commander, I can apply the controller to the robot.

Any suggestions would be helpful. Thank you in advance.

gavanderhoorn commented 3 years ago

moveit_commander talks to MoveIt. MoveIt talks to 'hardware'.

What sort of interfaces does the Nextage Open expose (ie: ROS topics, actions, services, etc)? I feel that would be important here.

graziegrazie commented 3 years ago

Thank you for your comment, @gavanderhoorn

Between MoveIt and hardware, hrpsys_ros_bridge and OpenRTM exist. OpenRTM accesses hardware. Software architecture of Nextage Open is shown here.

Nextage Open exposes the three types of interfaces described in this tutorial. I'm sorry; what I said earlier was wrong.

With these interfaces, a user can get and set data such as joint angles and the end-effector's target pose. For now, I'm not considering implementing a new RTC.

My plan is to transform joint angles from one of the interfaces into Cartesian positions, and vice versa, instead of using joint handles. The transformation matrices would be calculated from each link length and the current joint angles.
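
That idea can be sketched minimally with a hypothetical planar two-link chain (link lengths and dimensions are illustrative, not Nextage Open's actual kinematics):

```python
import math

def planar_fk(joint_angles, link_lengths):
    """Forward kinematics of a planar serial chain: joint angles -> end point.

    Toy example only: a real robot needs full 3D transforms from the URDF.
    """
    x = y = 0.0
    theta = 0.0
    for q, length in zip(joint_angles, link_lengths):
        theta += q          # accumulate the joint rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Two links of length 1, both joints at 0 -> end point lies at (2, 0).
x, y = planar_fk([0.0, 0.0], [1.0, 1.0])
```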

Did that answer your question, @gavanderhoorn?

stefanscherzinger commented 3 years ago

Hi @graziegrazie

Thanks for your interest in these controllers! Your question is interesting, but unfortunately, I have little time at the moment to address this adequately. I'll try to have a look at it as soon as possible.

From the short look I've had at this, you could take the part of the cartesian_controllers where they write commands to the command buffers and let them speak to your hardware directly. But since ROS controllers are libraries, not standalone nodes, you would need some kind of fake RobotHW with an according controller_manager that does the usual read() -> update() -> write() control cycle for you. It would probably be easier to write such a hardware abstraction directly for your robot, with the benefit that others could use it with ROS control. Note that MoveIt would still be available via the joint_trajectory_controller's action interface.
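
The read() -> update() -> write() cycle such a fake RobotHW would need to provide can be sketched roughly like this (a Python sketch with made-up names, not the actual ros_control C++ API):

```python
class FakeRobotHW:
    """Hypothetical stand-in for a ros_control RobotHW (names are illustrative)."""

    def __init__(self, n_joints):
        self.positions = [0.0] * n_joints  # what read() would fetch from the driver
        self.commands = [0.0] * n_joints   # what write() would send to the driver

    def read(self):
        # On the real robot: query joint states, e.g. via hrpsys_ros_bridge/OpenRTM.
        return list(self.positions)

    def write(self):
        # On the real robot: forward the commanded positions to the driver.
        self.positions = list(self.commands)

def control_cycle(hw, controller_update, cycles=100):
    """The usual ros_control loop: read() -> update() -> write()."""
    for _ in range(cycles):
        state = hw.read()
        hw.commands = controller_update(state)
        hw.write()

# Toy "controller": move every joint 10% of the way toward a target each cycle.
target = [1.0, -0.5]
hw = FakeRobotHW(2)
control_cycle(hw, lambda q: [qi + 0.1 * (t - qi) for qi, t in zip(q, target)])
# After 100 cycles the joints have converged close to the target.
```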

stefanscherzinger commented 2 years ago

@graziegrazie

If this is still relevant for you, you could have a look at this controller. The idea is to connect moveit_commander (or MoveIt) to the controller's own joint_trajectory_controller and turn that into a target pose for the cartesian_compliance_controller.

graziegrazie commented 2 years ago

@stefanscherzinger

Thank you for your suggestions! Yes, this is still relevant for me. I'm sorry for my late response; another part of the same project has been taking up my time. It is wrapping up, so I will be able to concentrate on this controller soon. I may ask you some questions.

graziegrazie commented 2 years ago

Thanks to @stefanscherzinger's suggestion, I have successfully applied cartesian_motion_controller to Nextage Open.

Next, I'm trying cartesian_force_controller, and I have a question about how I should set /target_wrench (the target) and /ft_sensor_wrench (the measurement).

I'd like to make the robot imitate a recorded human motion of roller painting. While recording, the human uses a roller brush with a 6D force sensor. The robot holds the same roller for motion imitation. The target and the measurement are both derived from the sensor and represented in the same frame, for which I assume the end-effector link is used. I hope that, with cartesian_force_controller, the robot moves so as to drive the error between the target and the measurement to zero.

I changed cartesian_force_controller as in the link below. Since I set m_hand_frame_control to false, the target is used as-is. The measurement is also used as-is because the transformation in ftSensorWrenchCallback is commented out. The error is then computed as the target minus the measurement. With this change, I expected the robot to move so as to decrease the error. However, the robot moved independently of the error.

https://github.com/graziegrazie/cartesian_controllers/blob/master/cartesian_force_controller/include/cartesian_force_controller/cartesian_force_controller.hpp

stefanscherzinger commented 2 years ago

@graziegrazie

I have successfully applied cartesian_motion_controller to Nextage Open.

Good to hear that it works!

I changed cartesian_force_controller like the link below

Hm, that's normally not required when applying the controller to a custom robot. Is there a reason why you started to modify at the source code level instead of parameterizing the controller with URDF links? Is there some challenge involved that I'm not aware of? Thanks for providing the changes on GitHub, though.

From the use case you described, I think it should be possible to parameterize this accordingly. Note that your URDF link for the sensor must be part of the kinematic chain from base to tip, as explained here.

I'll try to provide some additional tips:

my_cartesian_force_controller:
    type: "position_controllers/CartesianForceController"

    # This is the tip of the robot tool that you want to use for your task.
    # In your use case, I would specify a URDF link that coincides with the roller for painting.
    # E.g. the z-axis along the roller axis and the x or y-axis normal to it for applying paint to
    # some surface. When you specify a target_wrench, i.e. some additional forces
    # that your robot should apply to its environment, that target_wrench gets applied in this frame.
    # The idea is to specify some pushing force that makes sense for your roller painting tool.
    end_effector_link: "tool0"

    # This is usually the link directly before the first actuated joint.
    # The controller will build a kinematic chain from this link up to end_effector_link.
    # It's also the reference frame for the superposition of error components in all controllers.
    robot_base_link: "base_link"

    # This is the URDF link of your sensor. Sensor signals are assumed to be given in this frame
    # (which is also what you mentioned above). It's important that this link is located somewhere
    # between end_effector_link and robot_base_link. Make sure to specify a chain and see issue #32.
    ft_sensor_ref_link: "sensor_link"

On a higher level: Have you thought about using the cartesian_compliance_controller for that? This controller inherits from both the cartesian_motion_controller and the cartesian_force_controller, and offers both interfaces for specifying user targets (motion and force-torque targets). I'm suggesting this because painting is to a large extent motion, and the force component is rather for making controlled contact.
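
For reference, a compliance controller configuration might look roughly like the force controller example above. The parameter names in the stiffness section below are from memory of the package's example configs and should be checked against the README:

```yaml
my_cartesian_compliance_controller:
    type: "position_controllers/CartesianComplianceController"

    # Same chain definition as for the force controller above.
    end_effector_link: "tool0"
    robot_base_link: "base_link"
    ft_sensor_ref_link: "sensor_link"

    # The frame in which the compliance (virtual spring) behavior is defined.
    compliance_ref_link: "tool0"

    # Assumed parameter names: stiffness of the virtual spring between the
    # target pose and the current pose, per Cartesian axis.
    stiffness:
        trans_x: 500
        trans_y: 500
        trans_z: 500
        rot_x: 20
        rot_y: 20
        rot_z: 20
```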

graziegrazie commented 2 years ago

Thank you for your comment, @stefanscherzinger.

Yes, in the end I will apply cartesian_compliance_controller to the robot. As a step toward that, I am trying cartesian_force_controller. My kinematic chain was the wrong case from issue #32. I updated the URDF to the proper case described there. Thank you for your suggestion!

Is there a reason why you started to modify at the source code level instead of parameterizing the controller with URDF links? Is there some challenge involved that I'm not aware of?

The reason I started to modify the source code is that I could not stop the robot with /target_wrench and /ft_sensor_wrench as I expected. This relates to question 2 below.

Based on your suggestion, let me update my questions as follows.

some surface. When you specify a target_wrench, i.e. some additional forces that your robot should apply to its environment, that target_wrench gets applied in this frame. The idea is to specify some pushing force that makes sense for your roller painting tool.

  1. Should I transform /target_wrench from "ft_sensor_ref_link" into "end_effector_link" before publishing? /target_wrench can be transformed from "end_effector_link" in computeForceError. In my opinion, it would be good to also have an option to transform it from "ft_sensor_ref_link". If the source frame of the following line in computeForceError were parameterized, recorded data could be used without transformation: target_wrench = Base::displayInBaseLink(m_target_wrench,Base::m_end_effector_link);
  2. With examples.launch, could you show me an example of how I should set /target_wrench and /ft_sensor_wrench? I expected that if /target_wrench has Fx=2 and all other components 0, and /ft_sensor_wrench is all 0, the robot would move along the x-axis to increase Fx in /ft_sensor_wrench. On the other hand, computeForceError adds /target_wrench to /ft_sensor_wrench as below, and I don't understand how I should define /target_wrench and /ft_sensor_wrench. I tried setting Fx=2 and all other components 0 in both wrenches in simulation; the robot did not stop but started to move. The same happened with Fx=2 in /target_wrench and Fx=-2 in /ft_sensor_wrench. I used robot.urdf.xacro from #48.
    return Base::displayInBaseLink(m_ft_sensor_wrench,m_new_ft_sensor_ref)
    + target_wrench
    + compensateGravity();
stefanscherzinger commented 2 years ago

@graziegrazie

Should I transform /target_wrench from "ft_sensor_ref_link" into "end_effector_link" before publish?

No, that's neither necessary nor intended. The intended usage is to specify a suitable end_effector_link in the URDF and to think in that frame when applying forces and torques through your robot to the environment. If you wish to command in the sensor's frame, then attach end_effector_link to your sensor's link directly with a unit transformation. That's ok for the controller.
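
Such an attachment could look like the following in the URDF, using the link names from the example configuration above (the identity origin is the "unit transformation"):

```xml
<!-- Attach the end-effector frame directly to the sensor frame
     with an identity (unit) transformation. -->
<joint name="tool0_joint" type="fixed">
        <origin xyz="0 0 0" rpy="0 0 0"/>
        <parent link="sensor_link"/>
        <child link="tool0"/>
</joint>
```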

With examples.launch, could you show me an example of how I should set /target_wrench and /ft_sensor_wrench?

That's a bit tricky in simulation. Just to be clear: /ft_sensor_wrench is never specified by the user directly; it's what the sensor measures. The user specifies what forces and torques the robot shall apply to its environment (via the end_effector_link). The best thing you can do in simulation is to steer the robot around with target_wrench to get a feeling for how it responds. But contacts and the computation of the force equilibrium will only make sense on real hardware. Having said that, if you really want to check the controller by hand in simulation, try setting end_effector_link and ft_sensor_ref_link to be identical. That's a special case, but then it's easier to test with actio = reactio.
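
The cancellation in that special case can be made concrete with a toy version of the error superposition quoted earlier (assuming all wrenches are already expressed in the base frame and simplifying gravity compensation to zero):

```python
def compute_force_error(target_wrench, ft_sensor_wrench,
                        gravity_compensation=(0.0,) * 6):
    """Toy sketch of the superposition in computeForceError.

    All three wrenches are assumed to be 6-vectors [Fx, Fy, Fz, Tx, Ty, Tz]
    already displayed in the robot base frame.
    """
    return [t + m + g for t, m, g in
            zip(target_wrench, ft_sensor_wrench, gravity_compensation)]

# The sensor measures the *reaction* to the applied force, so an achieved
# target shows up with the opposite sign and the error vanishes:
error = compute_force_error([2.0, 0, 0, 0, 0, 0], [-2.0, 0, 0, 0, 0, 0])
# error is the zero wrench -> the robot stops.
# Publishing Fx=2 on *both* topics instead doubles the error (Fx error of 4),
# which explains why the robot keeps moving in that experiment.
```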

graziegrazie commented 2 years ago

@stefanscherzinger

Thank you for your comment. Please let me confirm the premise of cartesian_force_controller.

I expect that cartesian_force_controller moves the end-effector until /ft_sensor_wrench applied to the end-effector equals /target_wrench, and then the end-effector stops. Is this correct? If my expectation is wrong, could you let me know what actually happens?

I am so sorry to take your time again and again.

stefanscherzinger commented 2 years ago

@graziegrazie

I am so sorry to take your time again and again.

No problem. You are welcome.

I expect that cartesian_force_controller moves end-effector so that /ft_sensor_wrench applied to the end-effector is equal to /target_wrench in the end, then the end-effector stops. Is this correct?

Yes, that's correct. But I would be more precise: The cartesian_force_controller moves until the /ft_sensor_wrench (displayed in the robot base frame) equals the /target_wrench (also displayed in the robot base frame).

graziegrazie commented 2 years ago

@stefanscherzinger

No problem. You are welcome.

Thank you so much!

Yes, that's correct. But I would be more precise: The cartesian_force_controller moves until the /ft_sensor_wrench (displayed in the robot base frame) equals the /target_wrench (also displayed in the robot base frame).

How can I see that the cartesian_force_controller moves until /ft_sensor_wrench (not the zero vector) equals /target_wrench (also non-zero) in the simulator with robot.urdf.xacro? If it is hard to see, what is the reason?

stefanscherzinger commented 2 years ago

@graziegrazie

How can I see that the cartesian_force_controller moves until the /ft_sensor_wrench (not 0 vector) equals the /target_wrench (also not 0) in simulator and robot.urdf.xacro?

Just publish two opposite, non-zero forces to confirm this. E.g.

:~$ rostopic pub /target_wrench geometry_msgs/WrenchStamped "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
wrench:
  force:
    x: 0.0
    y: 0.3
    z: 0.0
  torque:
    x: 0.0
    y: 0.0
    z: 0.0"

wait a few seconds and then

:~$ rostopic pub /my_cartesian_force_controller/ft_sensor_wrench geometry_msgs/WrenchStamped "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
wrench:
  force:
    x: 0.0
    y: -0.3
    z: 0.0
  torque:
    x: 0.0
    y: 0.0
    z: 0.0" 

I'm using small magnitudes to let the robot move slowly.

Note that the robot didn't stop in your earlier post

I tried that both of wrenches set Fx=2 and others=0 in simulation, the robot does not stop, but starts to move. In case that Fx=2 in /target_wrench and Fx=-2 in /ft_sensor_wrench as well.

because the sensor link was not part of the kinematic chain from base to tip. Thanks again for noting and for fixing this in the provided example with #48!

Note, however, that as long as the sensor frame is part of the kinematic chain from base to tip, its relative orientation with respect to the end_effector_link does not matter. They don't necessarily need to have equal orientation as is now the case in the example.

You could, e.g. rotate the end-effector by replacing the current entry with

        <joint name="tool0_joint" type="fixed">
                <origin xyz="0 0 ${link6_length / 2.0}" rpy="0 0 ${pi / 2.0}"/>
                <parent link="sensor_link"/>
                <child link="tool0"/>
        </joint>

[Screenshot from 2022-01-23 11-58-29]

and repeat the experiment. Note that you must publish along different axes, though, because of the changed orientation.

:~$ rostopic pub /target_wrench geometry_msgs/WrenchStamped "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
wrench:
  force:
    x: 0.0
    y: -0.4
    z: 0.0
  torque:
    x: 0.0
    y: 0.0
    z: 0.0" 

and

:~$ rostopic pub /my_cartesian_force_controller/ft_sensor_wrench geometry_msgs/WrenchStamped "header:
  seq: 0
  stamp:
    secs: 0
    nsecs: 0
  frame_id: ''
wrench:
  force:
    x: -0.4
    y: 0.0
    z: 0.0
  torque:
    x: 0.0
    y: 0.0
    z: 0.0"
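
The axis mapping between these two commands can be checked numerically: rotating the tool0-frame force by the +90° yaw from the modified URDF joint gives the sensor-frame force, and the counteracting measurement is its negation (a small sketch, not controller code):

```python
import math

def rot_z(angle, v):
    """Rotate a 3-vector about the z-axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x - s * y, s * x + c * y, z)

# Force commanded in the rotated tool0 frame (yaw of +90 deg w.r.t. sensor_link):
f_tool = (0.0, -0.4, 0.0)

# The same force expressed in the sensor frame: approximately (0.4, 0.0, 0.0).
f_sensor = rot_z(math.pi / 2.0, f_tool)

# The sensor measures the reaction, i.e. the opposite force:
# approximately (-0.4, 0.0, 0.0), matching the second rostopic command.
f_measured = tuple(-c for c in f_sensor)
```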
graziegrazie commented 2 years ago

@stefanscherzinger

Thank you for your examples.

Just publish two opposite, non-zero forces to confirm this. E.g.

I had thought that /ft_sensor_wrench should become equal to /target_wrench. Now I am convinced that /ft_sensor_wrench becomes the opposite of /target_wrench. It seems that the robot is now controlled with cartesian_force_controller as I expected!

Can I consult you about a way of doing continuous motion control combined with force control? cartesian_compliance_controller cannot detect by itself whether the end-effector has reached a target pose and target wrench. I understand that a user needs to detect this and set the next target one by one. Are there other choices?

stefanscherzinger commented 2 years ago

@graziegrazie

It seems that the robot is now controlled with cartesian_force_control as I expected!

I'm happy to hear that!

I understand that a user needs to detect its reach

Yes, that's true. The cartesian_compliance_controller is meant to reach an equilibrium between target motion and contact forces/torques. Whether an in-contact motion was successful is up to the user to decide.

and set next target one by one. Are there other choices?

That depends on what you want to do. You can of course publish a stream of target poses, i.e. a moving target, but whether the controller follows depends on your controller parameterization (and the obstacles in your way). There's also a Cartesian trajectory interface that you could use to publish targets from Cartesian trajectory definitions. Like before, those commands will stay open-loop, though.
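
A stream of target poses could be generated, for example, by simple linear interpolation. This is a position-only sketch; a real target stream would also interpolate orientation (e.g. with slerp), and the actual topic and message handling of the controller are omitted here:

```python
def interpolate_poses(start, goal, steps):
    """Yield a stream of intermediate target positions from start to goal.

    `start` and `goal` are (x, y, z) tuples; orientation is omitted in
    this sketch for brevity.
    """
    for i in range(1, steps + 1):
        a = i / steps  # interpolation parameter in (0, 1]
        yield tuple(s + a * (g - s) for s, g in zip(start, goal))

# Each yielded pose would be published as the controller's target at the
# desired rate; the final element equals the goal exactly.
targets = list(interpolate_poses((0.0, 0.0, 0.5), (0.2, 0.0, 0.5), steps=10))
```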

graziegrazie commented 2 years ago

@stefanscherzinger

Thank you for your suggestion! It is very helpful. The Cartesian trajectory interface is pose-based control, but I need target wrenches as well, so I will use cartesian_compliance_controller.

I may ask you further questions. Please let me keep this issue open.

graziegrazie commented 2 years ago

@stefanscherzinger

I want to use the torso joint, but not have it rotate so much; mainly I'd like to move the joints in the arm. I am planning to increase the damping rate of m_current_velocities from 10% to something higher in getJointControlCmds in ForwardDynamicsSolver.cpp.

If you know more suitable ways, could you let me know?

graziegrazie commented 2 years ago

I checked the code further and found that changing m and ip for the torso in buildGenericModel seems better. I'm sorry for my poor reading of the code earlier.

stefanscherzinger commented 2 years ago

@graziegrazie

That's an interesting use case. Unfortunately, there's currently no mechanism exposed that allows users to do that without changing parts of the implementation.

then I found that to change m and ip for torso in buildGenericModel seems better.

Yes, that's the section. Giving earlier links in the kinematic chain more mass and inertia makes them contribute less to the forward-simulated motion. It could be an interesting feature for redundant robots to let users specify this effect online, e.g. via dynamic reconfigure. Currently, the objective is to obtain task space linearity, i.e. feedback linearization for the Cartesian control law.
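
As a toy illustration of why extra mass and inertia suppress a joint's motion (deliberately ignoring the coupling between joints through the real mass matrix):

```python
def joint_accelerations(torques, inertias):
    """Decoupled toy model: per-joint acceleration = torque / inertia.

    The real forward dynamics solver couples all joints through the mass
    matrix; this sketch only shows the scaling effect of inertia.
    """
    return [tau / m for tau, m in zip(torques, inertias)]

# Same generalized effort on a "torso" joint and an "arm" joint, but the
# torso is given 100x the inertia:
acc = joint_accelerations([1.0, 1.0], [100.0, 1.0])
# The torso joint now accelerates 1% as much as the arm joint, so the
# forward simulation moves it far less for the same task-space effort.
```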

graziegrazie commented 2 years ago

Thank you so much, @stefanscherzinger. Thanks to your help, I achieved what I hoped for, so I am closing this issue.