Open vigisushrutha23 opened 1 month ago
What we do is load the URDF model using iDynTree and then update the internal model of the robot with the actual joint values via read_encoders. The grasp constraint is that the object is always attached to both arms: our constraint calculations ensure that once the object is grasped, the distance between the left and right palms is maintained. A Cartesian controller then controls the pose of the object. The initial issue we faced was that over time, while the horizontal distance (y-axis) did not drift much, the relative x and z between the left and right palms drifted a lot over experiments lasting > 2 min or 10 bimanual actions. I was naively hoping that the solution to the wrist value mismatch presented in issue #261 would solve our problem fully, but that was not the case. After some modifications to the initial and reset conditions, we are currently in the following state.
https://github.com/user-attachments/assets/064ae13f-8d52-492c-814b-b2a5b95416f5 We perform a series of manipulations starting from a central rest pose: rotate right, rotate left, reset to centre, rotate right again, reset, lower, and reset. So technically the reset pose should always bring the robot back to the initial central pose. But if you observe closely, the height of the hands in the reset pose keeps increasing.
Right now, as you can see, the horizontal constraint (y-axis) drifts very little, but x and z do drift (approx. 8-9 cm), even though theoretically they are supposed to stay the same over time due to the constraint we enforce.
Here I decided to plot the palm positions w.r.t. the torso to see if they drift w.r.t. the base in a similar manner, as this can be useful for maintaining the quality of the grasp. In the position plots, blue is the left palm and green is the right palm w.r.t. the torso. As you can see, the drift of the palm height w.r.t. the torso seems to be lower than the left-right relative height drift; the same holds for the x-direction. As the video shows, the hands keep rising, which is not sustainable for precise grasping and manipulation. This mismatch could be due to orientation changes, something we do not include in our constraints.
I am trying to eliminate any other sources of error before focusing entirely on refactoring our object/grasp constraints. To that extent, I would like to know whether something like what is mentioned in the comments below could potentially cause inaccuracies when updating the internal model used by our algorithm.
It is important to notice that this problem does not appear in simulation.
For the record, in simulation the wrist has no coupling involved; it is modelled as the equivalent serial mechanism.
Just a few comments:
You might want to compute the distance and difference in angle between the hands. There should be code that computes this already:
yarp::sig::Vector &constraintData = this->constraintAdherence.prepare();
constraintData.clear();
Eigen::Vector<double,6> temp = pose_error(this->leftPose, this->rightPose); // Get the pose error between the hands
constraintData.push_back(temp.head(3).norm()*1000); // Position error in mm
constraintData.push_back(temp.tail(3).norm()*180/3.141592); // Orientation error in deg
The fact that it works in simulation perfectly, but drifts on the real robot indicates a hardware problem. We had issues with inaccurate joint tracking in the past. Is it the same case here?
Watch out for near-singular configurations. In the test videos for the ICRA submission I noticed the right arm reaching singular configurations. I have a control barrier function to help prevent it, but it's not 100% perfect. As a backup, I use damped least squares, but this adds an error term which can cause the grasp constraint to deviate.
The biggest issue I faced when working with the i/ergoCub is that, since the joints run in position mode, I had to use a virtual joint state for the internal calcs, which are then passed as a reference position to the motor controllers. This means my controller has no idea if the real robot is performing as expected, and cannot correct for it. The motors need to perform perfectly.
I re-set the virtual position every time a new action is called. If the problem persists, we might have to write some sort of grasp correction term. It's not pretty, but I can't think of much else without proper full state feedback :man_shrugging:
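The virtual joint state described above might look roughly like this minimal sketch (class and method names are ours, not the actual controller's):

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch of a virtual joint state: the controller integrates
// its own commanded velocities instead of reading encoders each cycle,
// sends the result as a position reference, and re-seeds from the encoders
// whenever a new action starts, clearing accumulated error at action
// boundaries.
class VirtualJointState
{
public:
    explicit VirtualJointState(std::size_t n) : q(n, 0.0) {}

    // Called once per new action with the latest encoder readings.
    void resetFrom(const std::vector<double>& encoders) { q = encoders; }

    // Called every control cycle: integrate the commanded joint velocity.
    // The real robot is assumed to track this reference perfectly.
    void integrate(const std::vector<double>& qdot, double dt)
    {
        for (std::size_t i = 0; i < q.size(); ++i) q[i] += qdot[i] * dt;
    }

    // Position reference sent to the motor controllers.
    const std::vector<double>& reference() const { return q; }

private:
    std::vector<double> q;  // virtual joint positions [rad]
};
```

Anything the motors fail to track accumulates invisibly between resets, which is exactly why full state feedback (or a grasp correction term) would be needed to close the loop.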
So I made a simple change to the code to check whether the commanded object pose is the same as the previous one. In that case, I don't stop the thread to re-initiate the trajectory generation. Below are the plots.
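A minimal sketch of such a same-command check (function name, pose layout, and tolerances are assumptions, not the actual implementation):

```cpp
#include <array>
#include <cmath>

// Pose as position + unit quaternion: x, y, z, qw, qx, qy, qz.
using Pose = std::array<double, 7>;

// Returns true only if the newly commanded pose differs from the last one
// beyond the given tolerances, so repeated identical commands do not
// interrupt the running trajectory thread.
bool poseChanged(const Pose& last, const Pose& cmd,
                 double posTol = 1e-4, double ornTol = 1e-4)
{
    for (int i = 0; i < 3; ++i)
        if (std::fabs(cmd[i] - last[i]) > posTol) return true;
    for (int i = 3; i < 7; ++i)
        if (std::fabs(cmd[i] - last[i]) > ornTol) return true;
    return false;
}
```

One caveat with a component-wise quaternion comparison: q and -q encode the same rotation, so a robust check would compare |dot(q_last, q_cmd)| against 1 instead.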
Observe that the drift is much smaller compared to https://github.com/hsp-iit/ergocub-bimanual/issues/6#issuecomment-2459567453.
Once again, better performance, I think, w.r.t. https://github.com/hsp-iit/ergocub-bimanual/issues/6#issuecomment-2459567453, based on the scale of the plots alone.
I already get the hand poses from ROS, so at the moment I don't need that calculation. Secondly, I ran the experiments with an easier configuration, as seen in the video, so I don't think singularity is the issue. The cause is, as you say, either hardware, a difference between the internal and robot models, or inaccuracies in the object pose estimation. The estimation seems odd: when I gave the reset command multiple times, the estimated object pose remained the same internally, but the actual robot moved the object higher with each reset command, even though it was only supposed to maintain the same pose. This is why I implemented the pose-change check to avoid sending too many identical commands, and that seems to have helped, although the drift should not have been there in the first place.
Over long experiments (> 5 object pose manipulations), the right hand drifts in the x and z directions. In the branch all_nav_stack_changes we are making changes that work with the human-avoidance module and improve stability over time, to ensure the object is not dropped due to the drift. This issue will track the progress.