lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Best way to visualize joint states estimation #730

Open manuelgitgomes opened 9 months ago

manuelgitgomes commented 9 months ago

The bagfiles I have recorded come with the joint states already present. Because of this, I cannot simply change the URDF and expect the changes to apply when visualizing with playback.launch.

One option is to use the tool created in #691, but to the opposite effect (de-noising the joints). This should work, right?

If so, should a corrected bagfile be created automatically after the calibration (or should we at least add an option for it)?

miguelriemoliveira commented 9 months ago

Hi @manuelgitgomes ,

The bagfiles I have recorded come with the joint states already present.

This means that you have the /joint_states topic with messages, right?

Not sure about the suggestion. Our output in ATOM is twofold:

  1. a calibrated atom dataset, for evaluation purposes
  2. a calibrated urdf, for ros integration

The calibrated urdf would have the joint bias inserted into the calibration -> rising tag. Is that not enough, i.e., will the robot state publisher not take the joint calibration into account to compensate the joint value it reads from the joint state messages?

That would be the best way to proceed.

What you suggest is possible, but if it works as suggested above it would be cleaner, no?

manuelgitgomes commented 9 months ago

What you suggest is possible, but if it works as suggested above it would be cleaner, no?

As discussed previously, it's not possible.

I think it would be helpful to write code for visualizing the optimized system. We could republish new joint states when running the playbag of the optimized system, so we can compare old vs. new (see the sketch below). We could also generate a urdf with the pattern included, so we can better see the projection of the expected pattern pose in the images against the detected pattern pose.
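Something along these lines is what I have in mind (a minimal sketch; the topic names, the node name, and the offsets dictionary are illustrative, not existing ATOM code):

```python
#!/usr/bin/env python
# Sketch of a node that republishes joint states with the calibrated biases
# applied, so the optimized system can be visualized alongside the original.
# Topic names, node name, and the offsets dictionary are hypothetical.
import rospy
from sensor_msgs.msg import JointState

# hypothetical calibration output: joint name -> estimated position bias [rad]
JOINT_OFFSETS = {'shoulder_pan_joint': 0.012, 'elbow_joint': -0.004}


def callback(msg):
    corrected = JointState()
    corrected.header = msg.header
    corrected.name = list(msg.name)
    corrected.position = [p + JOINT_OFFSETS.get(n, 0.0)
                          for n, p in zip(msg.name, msg.position)]
    corrected.velocity = list(msg.velocity)
    corrected.effort = list(msg.effort)
    pub.publish(corrected)


if __name__ == '__main__':
    rospy.init_node('calibrated_joint_state_republisher')
    pub = rospy.Publisher('/calibrated_joint_states', JointState, queue_size=10)
    rospy.Subscriber('/joint_states', JointState, callback)
    rospy.spin()
```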

miguelriemoliveira commented 9 months ago

Sounds like a good idea. However, about the way to save the calibrated transformation, I think the best is to change the xacro. If you change the xacro you can use it for the bagfile in playbag, or you can use it in the real system.

The solution of changing the joints has the limitation that it will not work on the real system.

I know what to do and I can try to produce a calibrated urdf that also includes the joint calibrations.

About the comparison, we would have to create two systems in one, just for visualization in rviz. Isn't that what we already get during calibration if you use the flag

-ipg, --initial_pose_ghost Draw a ghost mesh with the systems initial pose. Good for debugging.

manuelgitgomes commented 9 months ago

I think the best is to change the xacro. If you change the xacro you can use it for the bagfile in playbag, or you can use it in the real system.

I talked with @v4hn and he advised me against it. He said that it will achieve the same results, yes, but then the same JointSpace will achieve different robot poses on different URDFs.

The solution of changing the joints has the limitation that it will not work on the real system.

Using the calibration tag will work on the real system, but not on the bag file, which is why I want to change the joint states.

I know that having different solutions for the real systems and for the bag file is not optimal, but it is the way we guarantee standardization.

miguelriemoliveira commented 9 months ago

Hi @manuelgitgomes ,

Sorry but I am still not convinced. So the argument against touching the urdf is that:

the same JointSpace will achieve different robot poses on different URDFs.

Yes it will, but that is exactly what is intended after a calibration.

You use the uncalibrated URDF, move to joint position J1,J2,J3, the end effector goes to XYZ.

You use the calibrated URDF, move to joint position J1,J2,J3, the end effector goes to X'Y'Z' different from XYZ.

That's exactly what we want and expect after a joint calibration. Perhaps I am missing the point here, but I really think touching the urdf is better than the alternative.
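As a toy illustration (a two-link planar arm, purely hypothetical and unrelated to any robot discussed here), the same joint readings give a different end-effector position once the estimated bias is compensated in the kinematic model:

```python
import math

# Toy two-link planar arm, just to illustrate the argument above:
# same encoder readings, with and without the bias compensated in the model.
len1, len2 = 0.5, 0.4          # link lengths [m] (made up)
q1, q2 = 0.30, 0.60            # joint positions as read from /joint_states [rad]
bias1 = 0.02                   # estimated bias of the first joint [rad]


def fk(a, b):
    x = len1 * math.cos(a) + len2 * math.cos(a + b)
    y = len1 * math.sin(a) + len2 * math.sin(a + b)
    return x, y


print(fk(q1, q2))              # uncalibrated model -> XYZ
print(fk(q1 + bias1, q2))      # calibrated model   -> X'Y'Z'
```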

@v4hn, can you try to explain your point of view better?

v4hn commented 9 months ago

I think the best is to change the xacro. If you change the xacro you can use it for the bagfile in playbag, or you can use it in the real system.

I expect you mean "urdf", as the xacro should not be calibrated, but actually loads calibration values from yaml in our case and generates the (calibrated) urdf.

but then the same JointSpace will achieve different robot poses on different URDFs.

The statement should read "the same JointSpace positions will achieve different robot poses on different robots built from the same design/manufacturing process even (or especially) after calibration". The idea (which is followed by the PR2 and others) is that calibration of joint offsets should account for physical errors (i.e., the offset of the zero-crossing of the encoder) that make the robot deviate from an ideal model, such that all modules after calibration can assume that ideal model instead. The great benefit is that you can transfer joint states between PR2/TIAGo/Fetch/Panda Arm number 28 and 865 and it still represents the same physical state. For example some vendors define transport or home configurations for packaging the robots, which should be equivalent on all calibrated robots. Of course this approach is limited to joint offsets as there is no way to compensate for changed link lengths along the arm and pretend the link has a different length. However at least for the PR2 the joint offset is the main source of error and other errors were neglected (please prove me wrong).

Your perspective, on the contrary, is one of system identification, where the URDF is supposed to represent the real physical state of the robot. Note that setting a calibration position != 0.0 for where the edge is detected does not contradict this view. It just adds another dof to calibrate an ideal zero position on top.

So I'm aware of at least two ways to adapt calibration values for joint offsets in the URDF. I expect you suggest to modify the joint origin (here the second float in rpy) to account for the calibration delta. The PR2 uses a different approach where a calibration tag is added, which is then read by the low-level motor controller which adds the calibration delta transparently before publishing/commanding the joint. Note that the whole ROS system otherwise ignores that calibration tag because it assumes the driver takes care of it; the wiki even documents the value as something that triggers an edge on the hardware (relative to an ideal zero position), not at all relating it to the ideal kinematics model.

On the side, please let me point out that I'm not at all a fan of outputting a calibrated URDF. URDFs are generated from templates (mostly xacro in practice) and I don't want to fiddle with URDF diffs just because I want to regenerate my robot urdf with a new camera mounted somewhere. I just want to adapt the template and regenerate the urdf. So we use a yaml file for the calibration parameters and load that in xacro, adding the values in the correct spots. My dream for an ideal calibration system for URDF would still be to define, e.g., in xacro ${1.57 + calibrate('name_of_dof')} as the value of anything, run calibration on the system and get back a yaml that defines the values of name_of_dof and use that automatically in the template. Just dreaming here :-)

Getting back to the actual issue, I see two solutions, neither of them working in both the bag file and the real system:

  • provide/upload an adapted URDF, e.g., as calibrated_robot_description, with changed joint origins and add a RobotModel to RViz that uses it, or
  • add another joint_states topic which adds the offsets and use that for visualization.

Looking into it for a moment, I just noticed again that the RobotModel display uses the robot_state_publisher TF, so for this solution you would either switch to MoveIt's RobotState display (which requires a different message) or add another robot_state_publisher with a tf prefix. So all in all the first option is probably easier for visualization, but still useless on the real system for joint offsets.

miguelriemoliveira commented 9 months ago

Hi @v4hn ,

thanks for the detailed comments. My responses inline.

I think the best is to change the xacro. If you change the xacro you can use it for the bagfile in playbag, or you can use it in the real system.

I expect you mean "urdf", as the xacro should not be calibrated, but actually loads calibration values from yaml in our case and generates the (calibrated) urdf.

Yes, I meant urdf.

but then the same JointSpace will achieve different robot poses on different URDFs.

The statement should read "the same JointSpace positions will achieve different robot poses on different robots built from the same design/manufacturing process even (or especially) after calibration". The idea (which is followed by the PR2 and others) is that calibration of joint offsets should account for physical errors (i.e., the offset of the zero-crossing of the encoder) that make the robot deviate from an ideal model, such that all modules after calibration can assume that ideal model instead. The great benefit is that you can transfer joint states between PR2/TIAGo/Fetch/Panda Arm number 28 and 865 and it still represents the same physical state. For example some vendors define transport or home configurations for packaging the robots, which should be equivalent on all calibrated robots. Of course this approach is limited to joint offsets as there is no way to compensate for changed link lengths along the arm and pretend the link has a different length. However at least for the PR2 the joint offset is the main source of error and other errors were neglected (please prove me wrong).

Thanks for the detailed explanation. My feeling is also that a large portion of the errors comes from the joint position biases, but we will investigate the influence of other parameters in the near future.

Your perspective, on the contrary, is one of system identification, where the URDF is supposed to represent the real physical state of the robot. Note that setting a calibration position != 0.0 for where the edge is detected does not contradict this view. It just adds another dof to calibrate an ideal zero position on top.

So I'm aware of at least two ways to adapt calibration values for joint offsets in the URDF. I expect you suggest to modify the joint origin (here the second float in rpy) to account for the calibration delta.

Yes, the second one, because the joint axis is 0 1 0. Note that we could also modify one of the xyz values, in the case of calibrating a prismatic joint and estimating a linear bias.
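For illustration, a minimal sketch of how the origin could be patched with plain xml.etree (the file names, joint name, and bias value are hypothetical, and ATOM's actual urdf generation may do this differently):

```python
import xml.etree.ElementTree as ET

# Hypothetical calibration output: joint name -> estimated position bias [rad]
biases = {'shoulder_lift_joint': 0.015}

tree = ET.parse('robot.urdf')               # hypothetical input urdf
for joint in tree.getroot().findall('joint'):
    name = joint.get('name')
    origin = joint.find('origin')
    if name not in biases or origin is None:
        continue
    rpy = [float(v) for v in origin.get('rpy', '0 0 0').split()]
    rpy[1] += biases[name]                  # axis is 0 1 0 -> add bias to pitch
    origin.set('rpy', ' '.join(str(v) for v in rpy))
tree.write('robot_calibrated.urdf')         # hypothetical output urdf
```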

The PR2 uses a different approach where a calibration tag is added, which is then read by the low-level motor controller which adds the calibration delta transparently before publishing/commanding the joint. Note that the whole ROS system otherwise ignores that calibration tag because it assumes the driver takes care of it; the wiki even documents the value as something that triggers an edge on the hardware (relative to an ideal zero position), not at all relating it to the ideal kinematics model.

We saw these rising and falling attributes in the calibration tag of the joint urdfs. I think the limitation is that they address only the position bias, whereas we could calibrate the other parameters as well. Also, if we added the estimated biases to the joint->calibration->rising attribute, we would need to run the live system (with the motor drivers included) to put this compensation into effect. Alternatively, we can change the joints in the bagfile as @manuelgitgomes suggested, but that sounds more complex.

On the side, please let me point out that I'm not at all a fan of outputting a calibrated URDF. URDFs are generated from templates (mostly xacro in practice) and I don't want to fiddle with URDF diffs just because I want to regenerate my robot urdf with a new camera mounted somewhere. I just want to adapt the template and regenerate the urdf. So we use a yaml file for the calibration parameters and load that in xacro, adding the values in the correct spots. My dream for an ideal calibration system for URDF would still be to define, e.g., in xacro ${1.57 + calibrate('name_of_dof')} as the value of anything, run calibration on the system and get back a yaml that defines the values of name_of_dof and use that automatically in the template. Just dreaming here :-)

That is a very nice idea, and I see why you don't want to change the urdf. It does make the xacro more complicated, though, and I am not sure how many people have these parameterized xacros for their robots. I remember seeing these for the Universal Robots ... but I am not sure everyone will have such evolved xacros. However, we can, in addition to the calibrated urdf, produce a yaml with the calibrated values for the optimized parameters. That should be relatively easy, and perhaps you can plug it directly into your yaml-based mechanism.
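A first version could be as simple as the sketch below (the parameter names, values, and file name are made up, not ATOM output):

```python
import yaml

# Hypothetical dictionary of optimized parameters coming out of a calibration,
# dumped to a yaml file so it can be loaded from xacro (or anywhere else).
calibrated_params = {
    'joint_biases': {'shoulder_pan_joint': 0.012, 'elbow_joint': -0.004},
    'sensor_poses': {'front_camera': {'xyz': [0.10, 0.00, 0.52],
                                      'rpy': [0.0, 0.02, 0.0]}},
}

with open('calibrated_params.yaml', 'w') as f:
    yaml.safe_dump(calibrated_params, f, default_flow_style=False)
```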

Getting back to the actual issue, I see two solutions, neither of them working in both the bag file and the real system:

  • provide/upload an adapted URDF, e.g., as calibrated_robot_description, with changed joint origins and add a RobotModel to RViz that uses it, or
  • add another joint_states topic which adds the offsets and use that for visualization.

Right, the first hypothesis is in line with what I was suggesting, the second more aligned with what @manuelgitgomes was suggesting. There are two main reasons why I like the first better.

The first reason is that it should work for the calibration of all joint parameters, whereas the second hypothesis is limited to the joint position bias parameters.

The second reason why I like the first hypothesis better is that I think it will work for the bag_file and the simulated system as well, because of how ATOM plays the bag files. Let me try to explain:

ATOM remaps the tf and tf_static topics so that they are discarded, and launches a robot_state_publisher, which will read the calibrated urdf. The joint state messages are left unchanged, so they will continue to have errors. Also, the idea is not to use the rising compensation.

With this information, the robot state publisher will produce transformations on the /tf and /tf_static topics which are now calibrated, because they are computed from the calibrated urdf.

The mechanism we have in ATOM to inspect after calibrating a system is quite simple: we just have to run the playbag.launch with optimized:=true so that the calibrated urdf is used, and we can see the bagfile running with the calibration on top.

The interesting part is that we can launch the live system with exactly the same mechanism and it will also run with the calibration on top.

Looking into it for a moment, I just noticed again that the RobotModel display uses the robot_state_publisher TF, so for this solution you would either switch to MoveIt's RobotState display (which requires a different message) or add another robot_state_publisher with a tf prefix. So all in all the first option is probably easier for visualization, but still useless on the real system for joint offsets.

I think this question is answered above with our playbag mechanism.

Note that we already produce a calibrated urdf (which you can just discard if you don't want to use it), so it makes sense to include the joint calibration in this urdf.

Thus, my suggestion is that @manuelgitgomes proceeds with the second hypothesis (changing the joint states), and I will try to develop the calibrated urdf when I find some time.

Then we can also think about producing that yaml of the calibrated parameters ...

manuelgitgomes commented 9 months ago

Hello to both.

Yes, I meant urdf.

This happens a lot in our framework, as @v4hn has brought to my attention; that is why I created #774.

The first reason is that it should work for the calibration of all joint parameters, whereas the second hypothesis is limited to the joint position bias parameters.

The thing is that all the other joint parameters do not have a standard place to be, whereas these joint offsets do, just like @v4hn explained.

The second reason why I like the first hypothesis better is that I think it will work for the bag_file and the simulated system as well, because of how ATOM plays the bag files.

Yes, I believe so as well.
I conceptually see no reason why it would not work. I reiterate that it will not be standard, so this bothers me.

With the issue I presented in #773, we might need to generate a new bag file nevertheless, so why not just change the joint states there?

miguelriemoliveira commented 8 months ago

The thing is that all the other joint parameters do not have a standard place to be, whereas these joint offsets do, just like @v4hn explained.

I reiterate that it will not be standard, so this bothers me.

I understand that is the standard, and we would not be following it. But the fact is that the standard does not cover everything we need: it only supports the joint position bias, while we estimate more parameters.

With the issue I presented in https://github.com/lardemua/atom/issues/773, we might need to generate a new bag file nevertheless, so why not just change the joint states there?

I agree. It makes sense to think of a script to calibrate (or to apply the results of a calibration to) a bagfile. That would correct the joint states and the rgb intrinsics; about the transforms I am not sure, because sometimes ...

As I said before, you should work on this bagfile calibration script.
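A rough sketch of what such a script could look like using the rosbag API (the bag file names, topic name, and offsets are illustrative; the real script would also have to handle the intrinsics and whatever else we decide to correct):

```python
import rosbag

# Hypothetical calibration result: joint name -> estimated position bias [rad]
JOINT_OFFSETS = {'shoulder_pan_joint': 0.012, 'elbow_joint': -0.004}

# Read every message from the original bag, correct the joint states,
# and write everything into a new "calibrated" bag.
with rosbag.Bag('train.bag') as inbag, \
     rosbag.Bag('train_calibrated.bag', 'w') as outbag:
    for topic, msg, t in inbag.read_messages():
        if topic == '/joint_states':
            msg.position = [p + JOINT_OFFSETS.get(n, 0.0)
                            for n, p in zip(msg.name, msg.position)]
        outbag.write(topic, msg, t)
```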

From my side, I still think that for many cases (considering the goal of broadening the usage of ATOM) the production of a calibrated urdf might be useful, so I will work on it. And when I have some spare time I will implement the automatic creation of a calibrated params file as a Christmas present to @v4hn :-).