By comparison:
Gazebo demo (roslaunch nextagea_gazebo nextagea_world.launch):
$ rosrun tf tf_echo nextage_base CHEST_JOINT0_Link
At time 105.867
- Translation: [0.000, 0.000, 0.267]
- Rotation: in Quaternion [0.000, 0.000, 0.001, 1.000]
in RPY (radian) [0.000, -0.000, 0.002]
in RPY (degree) [0.000, -0.000, 0.096]
PyBullet demo (roslaunch rpbi_examples basic_example_nextage.launch):
$ rosrun tf tf_echo rpbi/kuka_lwr/nextage_base rpbi/kuka_lwr/CHEST_JOINT0_Link
At time 1654112998.782
- Translation: [0.190, 0.000, 0.902]
- Rotation: in Quaternion [0.000, 0.000, -0.018, 1.000]
in RPY (radian) [0.000, 0.000, -0.037]
in RPY (degree) [0.000, 0.000, -2.117]
Why is the torso frame (CHEST_JOINT0_Link) shifted much more from the base (nextage_base)? It's not just scaling; there is also an additional translation in the x-direction.
Hmm, this is really strange, I don't have an explanation - I've not seen anything like this on any other robot. I can only imagine there is some issue with the URDF.
@namikosaito this might be the source of the issues you have been seeing re the magic_offset.
@christian-rauch for context, in @namikosaito's teleop experiments the IK gives very strange results; the only (hacky) workaround we found was to add an offset to the target frame in the z-direction. I saw the same issue using EXOTica and my own IK package, but (as above) haven't seen anything like it with other robots - hence my suspicion about the URDF.
@christian-rauch can you estimate the offset? I wonder if it corresponds with the magic_offset @namikosaito sees.
The offset we have is magic_offset = [0., 0., 0.2] in the frame CHEST_JOINT0_Link.
With regards to the interface, there is no part where an offset is added, except when you specify a transform between rpbi/world and the URDF base frame. However, this wouldn't cause the issue you're seeing.
> @christian-rauch can you estimate the offset? I wonder if it corresponds with the magic_offset @namikosaito sees.
The offset in which transformation? Between which frames is the offset? Is it from CHEST_JOINT0_Link to HEAD_JOINT0_Link?
$ rosrun tf tf_echo CHEST_JOINT0_Link HEAD_JOINT0_Link
At time 19.359
- Translation: [0.000, 0.000, 0.302]
- Rotation: in Quaternion [0.000, 0.000, -0.000, 1.000]
in RPY (radian) [0.000, 0.000, -0.000]
in RPY (degree) [0.000, 0.000, -0.000]
$ rosrun tf tf_echo rpbi/kuka_lwr/CHEST_JOINT0_Link rpbi/kuka_lwr/HEAD_JOINT0_Link
At time 1654164555.189
- Translation: [-0.000, 0.005, 0.102]
- Rotation: in Quaternion [0.000, 0.000, 0.001, 1.000]
in RPY (radian) [0.000, -0.000, 0.003]
in RPY (degree) [0.000, -0.000, 0.157]
The offset in the transformation from CHEST_JOINT0_Link to HEAD_JOINT0_Link is about [0, 0.005, -0.2] relative to Gazebo, i.e. +5 mm in the y-direction and -20 cm in the z-direction.
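For illustration, a quick check of that difference from the two tf_echo outputs above (plain NumPy, not part of the interface):

```python
import numpy as np

# Translations of CHEST_JOINT0_Link -> HEAD_JOINT0_Link from the tf_echo outputs above.
gazebo = np.array([0.000, 0.000, 0.302])
pybullet_tf = np.array([-0.000, 0.005, 0.102])

print(pybullet_tf - gazebo)  # ~[0, 0.005, -0.2]: roughly 20 cm missing in z
```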
If we have these large offsets, how did we manage to run the Nextage demos? :-)
> I can only imagine there is some issue with the URDF.
What in the URDF could cause this? I never had issues using the URDF in Gazebo, on the real robot, or when rendering the URDF in OpenGL for depth-cloud filtering.
It would be good if we could find a minimal reproducible URDF that causes this issue.
It's not only the Nextage; the Kuka has the same issue:
roslaunch rpbi_examples basic_example_kuka_lwr.launch
This is with https://github.com/cmower/ros_pybullet_interface/pull/142, which sets the robot_description via rpbi_examples/robots/kuka_lwr.urdf.
So, maybe I am getting confused by the images: the robot links are in the right place (as in the URDF), but there is some offset between the RGBD data and the robot visualization in RViz?
In this case, perhaps there is something wrong with the camera intrinsics or how the data is being passed to the projection? When you do the projection using the interface that you implemented, does it resolve the issue? Or is the issue still there?
No, the robot links, as shown by the meshes in RViz, are clearly in the wrong place. This is independent of the camera, which only produces the point cloud.
Is it possible that the interface is not publishing all the links in the URDF? It's possible it is missing the fixed joints, and perhaps that is causing RViz not to render some of the meshes.
The "observed" robot links (see the blue points for example) show the links in the correct place. But as far as I can see, the published frames are in the wrong place, hence the links are shown in the wrong place.
So, as far as I know, PyBullet was originally developed for graphics people but is now used by robotics people. Do you think it's possible that there are some conventions that don't cross over? Otherwise, perhaps there is something not set correctly when calling loadURDF. (I'm clutching at straws.)
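For reference, a minimal sketch of the kind of loadURDF options that change how PyBullet interprets links and inertial data (the URDF path is a placeholder; this is not the interface's actual call):

```python
import pybullet as p

p.connect(p.DIRECT)

# Hypothetical example: these flags are the ones most likely to matter here.
robot = p.loadURDF(
    "kuka_lwr.urdf",                    # placeholder path
    useFixedBase=True,
    flags=p.URDF_USE_INERTIA_FROM_FILE  # take inertia values from the URDF
    | p.URDF_MERGE_FIXED_LINKS,         # collapses fixed joints into their parent link
)
```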
There is nothing in the interface that adds offsets to links. Can you push the code that produces this issue to a new branch so I can run it locally?
Well, I am pretty sure it is not related to the camera image reading. It is simply that the transformations seem wrong; hence, the links appear in a different place. It's easy to verify this with tf_echo, which should show the same transformations as Gazebo.
I noticed that the transformations are not published as a chain/tree (e.g. A->B->C) but as a flat hierarchy with every frame having the world as its parent (A->B, A->C). This makes it a bit hard to detect which of the transformations is affected. Is there a way to publish the chain similar to how Gazebo does it?
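For illustration, a minimal sketch of how parent-relative (chained) transforms could be computed from PyBullet, assuming a loaded body handle robot (this is not how the interface currently publishes TF):

```python
import pybullet as p

def parent_relative_transform(robot, link_index):
    """Pose of a link's URDF frame relative to its parent link, so TF could be
    published as a chain (A->B->C) instead of everything relative to the world."""
    parent_index = p.getJointInfo(robot, link_index)[16]  # -1 means the base link
    child_pos, child_orn = p.getLinkState(robot, link_index)[4:6]
    if parent_index == -1:
        parent_pos, parent_orn = p.getBasePositionAndOrientation(robot)
    else:
        parent_pos, parent_orn = p.getLinkState(robot, parent_index)[4:6]
    inv_pos, inv_orn = p.invertTransform(parent_pos, parent_orn)
    return p.multiplyTransforms(inv_pos, inv_orn, child_pos, child_orn)
```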
@cmower You can use https://github.com/cmower/ros_pybullet_interface/pull/142 to visualise the frames and robot model in RViz.
I simplified the Kuka LWR to two links:
https://user-images.githubusercontent.com/8226248/171914770-b399f95f-b394-4895-a3e1-d2a193b94daa.mp4
You can clearly see that the pivot point is not the origin of the coordinate frame but some other point.
@cmower TL;DR: If I remove the origin from the inertial element in the URDF (i.e. set it to the coordinate frame origin), then the kinematic tree renders correctly.
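That fits with the inertial origin being the culprit: for each link, the gap between PyBullet's centre-of-mass frame and the URDF link frame equals the inertial origin expressed in world coordinates. A minimal diagnostic sketch (placeholder URDF path, not the interface's code):

```python
import numpy as np
import pybullet as p

p.connect(p.DIRECT)
robot = p.loadURDF("kuka_lwr.urdf")  # placeholder path

for link in range(p.getNumJoints(robot)):
    state = p.getLinkState(robot, link)
    com_world = np.array(state[0])         # inertial (centre-of-mass) frame in world
    link_frame_world = np.array(state[4])  # URDF link frame in world
    # Non-zero rows indicate links whose <inertial><origin> shifts the two frames apart.
    print(link, com_world - link_frame_world)
```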
Potentially the issue is coming from here.
> Potentially the issue is coming from here.
Good hunch. The return values taken from getLinkStates are wrong. The interface was using linkWorldPosition and linkWorldOrientation (i.e. values 0 and 1), but it should have used worldLinkFramePosition and worldLinkFrameOrientation (i.e. values 4 and 5). This is fixed in PR #142.
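For illustration, a minimal sketch of picking the URDF link frame (values 4 and 5) instead of the inertial frame for TF (placeholder function, not the interface's actual code):

```python
import pybullet as p

def link_frame_in_world(body_id, link_index):
    """Pose of the URDF link frame in world coordinates, suitable for TF."""
    state = p.getLinkState(body_id, link_index)
    # state[0]/state[1] are the inertial (centre-of-mass) frame and shift with the
    # URDF's <inertial><origin>; state[4]/state[5] are the URDF link frame itself.
    return state[4], state[5]
```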
Does this fix the issue @christian-rauch ?
> Does this fix the issue @christian-rauch?
Yes, see https://github.com/cmower/ros_pybullet_interface/pull/142#issuecomment-1149010055. The GitHub issue will be closed automatically once this has been merged.
I started
roslaunch rpbi_examples basic_example_nextage.launch
and visualised the frames and the robot model (with tf prefix rpbi/kuka_lwr) in RViz. The visual links are correctly attached to the frames, but their relative distance seems off: the HEAD_JOINT0_Link, which is the first link of the head chain, is inside the torso. Hence, the torso collides with the head.