I would immediately make a distinction between the joints used for robot motion and those of a hand. The internal joints of a hand should, in my view, be separated from the robot and treated purely as an "implementation detail" of the hand. What's your opinion in this regard, @liesrock?
My idea on this is that the robot (seen as a collection of kinematic chains composed of joints - fixed, revolute, prismatic... - and links) and its end-effectors should be separated: inside xbot2 we should have one device (container) for the joints used for the motion of the robot itself, and other devices implementing the end-effector capabilities (these can use a different fieldbus, e.g. USB).
So I basically agree with @alaurenzi: the joints inside the end-effector should belong to the concrete device implementing the communication with the end-effector, and that device should provide the API consumed by ROS End-Effector (non-RT) on top. The end-effector xbot2 Device can also be RT (e.g. for the HERI II / III hands mounted on our robots, which use EtherCAT communication).
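To make the proposed split concrete, here is a minimal C++ sketch of what such a separation could look like. All class and method names are hypothetical illustrations of the idea, not the actual xbot2 HAL API:

```cpp
// Hypothetical sketch of the proposed architecture (names are illustrative,
// NOT the real xbot2 API): the robot only sees an abstract end-effector
// device, while the hand's internal joints live inside the concrete driver.

#include <vector>

// Abstract end-effector device: this is all the robot (and, on top of it,
// ROS End-Effector) would know about.
class EndEffectorDevice
{
public:
    // High-level command, e.g. normalized closure of a grasp primitive
    virtual bool setGraspReference(double closure) = 0;
    virtual double getGraspState() const = 0;
    virtual ~EndEffectorDevice() = default;
};

// Concrete RT implementation over EtherCAT (e.g. a HERI II / III hand):
// the internal finger joints are an implementation detail hidden in here.
class HeriEcatDevice : public EndEffectorDevice
{
public:
    explicit HeriEcatDevice(std::size_t n_motors)
        : _motor_refs(n_motors, 0.0)
    {}

    bool setGraspReference(double closure) override
    {
        // map the single grasp reference onto the internal motor targets;
        // a real driver would then write these over the EtherCAT bus
        for(auto& q : _motor_refs) q = closure;
        return true;
    }

    double getGraspState() const override
    {
        return _motor_refs.empty() ? 0.0 : _motor_refs.front();
    }

private:
    std::vector<double> _motor_refs; // internal joints, never exposed
};
```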
I'm closing this as deprecated: it was an old, overly generic question which has since been solved.
What we need for ROSEE is to distinguish between actuated and non-actuated joints. So, as a very first step, I am trying to figure out how to tell xbot2 not to take all the urdf joints, but only the ones I specify. I am trying to play with the yaml config file, along the lines of the sketch below:
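(The `joint_gz` device and its `type` key appear in the example hal configs; the `names` whitelist is my assumption of how a joint filter could be expressed, so the exact keys may well be wrong.)

```yaml
# Hypothetical config sketch: whitelist only the actuated joints.
# 'joint_gz' and 'type' come from the example hal configs; the 'names'
# whitelist is my guess at how a joint filter would be written.
joint_gz:
  type: joint_gz
  names: [motor_finger_1, motor_finger_2]   # hypothetical joint names
```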
but xbot2 always prints all the joints:
I am doing this because I want to use hal2 to communicate with the real robot, and I want xbot2 to be aware of the motors only, since the underactuated joints are neither controllable nor observable. So my first step was to try to move the motors of a hand.
I am also trying to enforce the joint mapping, so far unsuccessfully. What I am attempting looks roughly like this:
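(This is modeled on the joint-id map used by the old XBotCore; I am assuming xbot2 accepts something similar, and the key name and format are not verified.)

```yaml
# Hypothetical joint-id map sketch, modeled on the old XBotCore format:
# numeric motor ids on the bus mapped to urdf joint names.
joint_map:
  1: motor_finger_1   # hypothetical names, just for illustration
  2: motor_finger_2
```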
PS: what is the "type" node in the joint_gz field?