baxter-flowers / promplib

Python implementation of Probabilistic Motor Primitives, including a ROS overlay for JointTrajectory/JointState and RobotTrajectory/RobotState (MoveIt) messages
GNU Lesser General Public License v3.0

Are there simple codes of ProMP? #1

Open chenlx92 opened 7 years ago

chenlx92 commented 7 years ago

It's a little bit hard to execute the code with a different robot, because the robot model is incorporated into the ProMP. Is it possible to provide simple code without any robot information? I would appreciate your reply. --Chen Longxin

ymollard commented 7 years ago

It's also a little bit hard to interface with an arbitrary robot, since that requires an abstract representation of any robot, which is hard to get: robots can be pretty different, and abstracting over all of them makes the code heavy.

However, as said in the README, the only thing you need to change is the Kinematics interface, which we packaged in a single file. Keep the two classes and method signatures, and throw away their implementations, which are Baxter-specific.
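As a hypothetical sketch (class and method names here are placeholders, not the actual promplib signatures), a robot-agnostic stub of such a kinematics interface could look like:

```python
# Hypothetical, robot-agnostic stand-in for a Baxter-specific Kinematics
# interface: keep the class/method shape, replace the bodies for your robot.

class FK(object):
    def __init__(self, arm):
        self.arm = arm  # kinematic chain identifier, e.g. 'left' or ''

    def get(self, joint_state):
        """Return the end-effector pose [[x, y, z], [qx, qy, qz, qw]]
        for a dict {joint_name: position}. Identity pose as placeholder."""
        return [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0]]


class IK(object):
    def __init__(self, arm):
        self.arm = arm

    def get(self, pose, seed=None):
        """Return (success, {joint_name: position}) for a target pose.
        Always 'fails' in this stub; plug in your own solver."""
        return False, {}


fk, ik = FK(''), IK('')
print(fk.get({'joint_0': 0.0}))  # placeholder identity pose
```

A stub like this is enough for joint-space primitives; only Cartesian-space usage needs real FK/IK bodies.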

Scripts and notebooks obviously talk to your robot, and only you know how to move it, so you cannot execute them directly, but the core implementation of ProMPs is Baxter-free, except for Kinematics as said above.

Note: the import baxter_commander.persistence is a bit tricky; these are just ROS-message-to-Python-dict and Python-dict-to-ROS-message helper functions that you can pick up from here.
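For illustration only (the real helpers work on actual ROS message classes; everything below is a made-up stand-in), the idea behind such helpers is just a recursive conversion over the message fields:

```python
# Illustrative only: ROS messages expose their fields via __slots__, so a
# message-to-dict helper can recurse over them. FakeMsg mimics that shape
# so the idea can be shown without ROS installed.

class FakeMsg(object):  # stand-in for a ROS message class
    __slots__ = ('position', 'velocity')

    def __init__(self, position=0.0, velocity=0.0):
        self.position = position
        self.velocity = velocity


def msg_to_dict(msg):
    """Recursively convert a slotted message object to a plain dict."""
    if hasattr(msg, '__slots__'):
        return {s: msg_to_dict(getattr(msg, s)) for s in msg.__slots__}
    if isinstance(msg, (list, tuple)):
        return [msg_to_dict(m) for m in msg]
    return msg  # primitive leaf (float, int, str, ...)


def dict_to_msg(d, msg_class):
    """Rebuild a flat message from a dict (one level, for this fake class)."""
    return msg_class(**d)


d = msg_to_dict(FakeMsg(position=1.5, velocity=-0.2))
print(d)  # {'position': 1.5, 'velocity': -0.2}
```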

chenlx92 commented 7 years ago

Thank you very much, your answer is very helpful. I think that in the beginning I could set the arm parameter to ' '. The InteractiveProMP code is an implementation of "Phase Estimation for Fast Action Recognition and Trajectory Generation in Human-Robot Collaboration", right? I'm curious why you didn't choose NDProMP as the base of the InteractiveProMP class, but rewrote the QCartProMP class instead.

ymollard commented 7 years ago

"Q + Cartesian" primitives and "N-dimensional" primitives are two different things. The first lets you pick goals in Cartesian space, the second in joint space. Depending on your use case, you must select the primitive you need: QCartProMP handles most casual picking situations where the object pose is given in Cartesian space; NDProMP is much more precise but requires calling IK first, since it only accepts joint-space targets. On top of one of them, InteractiveProMP segments the space to handle a bunch of "local primitives".

The arm parameter allows selecting the right kinematic chain. Using fake kinematics will have no impact on NDProMP, but it won't work with QCartProMP.

This implementation is closer to this.

chenlx92 commented 7 years ago

I have a question about ProMPs. Can primitives such as NDProMP be stored in Cartesian space directly? The positions are probably fine, but the orientations should be described properly, e.g. by quaternions or Euler angles. Is this feasible?

ymollard commented 7 years ago

Yes, the ROS overlay uses NDProMP to store Cartesian motions here.

In my experience quaternions behave well with probabilistic motor primitives, although you might want to re-normalize them after trajectory generation.
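A generated trajectory averages quaternion components, so each sample can drift slightly off the unit sphere; a minimal re-normalization step (plain Python sketch, not promplib code) could be:

```python
import math

def normalize_quaternion(q):
    """Project a drifted quaternion (qx, qy, qz, qw) back onto the unit
    sphere so it represents a valid rotation again."""
    norm = math.sqrt(sum(c * c for c in q))
    if norm == 0.0:
        raise ValueError("zero quaternion cannot be normalized")
    return tuple(c / norm for c in q)

# e.g. a quaternion that drifted during probabilistic trajectory generation
q = normalize_quaternion((0.0, 0.0, 0.1, 1.02))
print(abs(math.sqrt(sum(c * c for c in q)) - 1.0) < 1e-12)  # True
```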

Cartesian-space MPs, though, have the drawback of requiring an IK solver that minimizes joint-configuration jumps. If there are still jumps that make the motion not executable in real life, you can refine the trajectory, but the final end-effector precision will be impacted, so it's a trade-off.

chenlx92 commented 7 years ago

I am wondering whether we can solve the IK problem with MoveIt!, in which obstacle avoidance is also available.

ymollard commented 7 years ago

You can easily solve IK for static poses through MoveIt with this service. However, the ProMP will ignore collisions; you can rely on MoveIt to tell you whether the output trajectory is in collision, but that won't be easy to recover from in that case.

chenlx92 commented 7 years ago

When I looked inside the code, I ran into a question. The relation between the NDProMP and ProMP classes is composition. I wonder if that's reasonable, because for now each joint's predictive distribution is updated separately when an observation arrives. So the correlation among the joints is ignored, right?

ymollard commented 7 years ago

In NDProMP the correlation is ignored, indeed. The correlation is taken into account in QCartProMP, but there the primitives are conditioned in task space, via the context.

There's no implementation of a correlated joint-space ProMP in this repository at the moment. The current implementation of NDProMP (a composition of independent 1D MPs) is trivial; it could be worthwhile to reimplement it with maths similar to QCartProMP's while keeping it in joint space. Let's poke @gjmaeda to ask if he knows the relevant papers.
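To make the difference concrete (a toy sketch with made-up numbers, not promplib code): modeling N joints as independent 1D ProMPs amounts to forcing a block-diagonal weight covariance, so the cross-joint terms a fully coupled model would learn are simply zeroed out:

```python
# Toy sketch (not promplib code): estimate the weight covariance of two
# joints from demonstrations, then compare the fully coupled model with
# the per-joint (independent) model that a composition of 1D MPs implies.

def covariance(xs, ys):
    """Unbiased sample covariance of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

# demonstrated weights for joint 1 and joint 2 (clearly correlated)
j1 = [0.9, 1.1, 1.0, 1.2, 0.8]
j2 = [1.8, 2.2, 2.0, 2.4, 1.6]

full_cov = [[covariance(j1, j1), covariance(j1, j2)],
            [covariance(j2, j1), covariance(j2, j2)]]

# independent 1D MPs per joint: off-diagonal terms forced to zero
indep_cov = [[covariance(j1, j1), 0.0],
             [0.0, covariance(j2, j2)]]

print(full_cov[0][1])   # non-zero: the joints move together
print(indep_cov[0][1])  # 0.0: the coupling is discarded
```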

gjmaeda commented 7 years ago

Hi,

Check the code section of this page under the subsection "Interaction ProMP"; there is code there that correlates 4 degrees of freedom. http://www.ausy.tu-darmstadt.de/Team/GuilhermeMaeda

Hope this helps, Guilherme


chenlx92 commented 7 years ago

Thanks.

About Interaction ProMP, I have some questions. @gjmaeda

In my opinion, the correlation among joints is captured in the covariance of the weights, which is used in the Kalman filter when an observation of human motion arrives. This results in the inferred joint motions (both the observed and the unobserved joints) keeping a shape similar to the training set. I don't know if this understanding is correct; please correct me if I'm wrong.
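To check my understanding numerically, here is a toy 2-DoF illustration (made-up numbers and names, not the actual Interaction ProMP code): conditioning the joint weight distribution on the observed DoF shifts the unobserved DoF's mean through the cross-covariance, which is exactly what would keep the inferred motion close to the demonstrated shapes:

```python
# Toy 2-DoF sketch: weights w = (w_obs, w_hidden) are jointly Gaussian;
# observing w_obs = z with noise variance r updates w_hidden through the
# Kalman gain built from the cross-covariance s12. With s12 = 0 the
# hidden DoF never moves.

def condition(mean, cov, z, r):
    """Condition a 2D Gaussian (mean=[m1, m2], cov=[[s11, s12], [s12, s22]])
    on a noisy observation z of the first component."""
    m1, m2 = mean
    (s11, s12), (_, s22) = cov
    k1 = s11 / (s11 + r)          # Kalman gain, observed component
    k2 = s12 / (s11 + r)          # gain for the hidden component
    innov = z - m1
    new_mean = [m1 + k1 * innov, m2 + k2 * innov]
    new_cov = [[s11 - k1 * s11, s12 - k1 * s12],
               [s12 - k2 * s11, s22 - k2 * s12]]
    return new_mean, new_cov

correlated = [[1.0, 0.8], [0.8, 1.0]]
independent = [[1.0, 0.0], [0.0, 1.0]]

m_corr, _ = condition([0.0, 0.0], correlated, z=1.0, r=0.01)
m_ind, _ = condition([0.0, 0.0], independent, z=1.0, r=0.01)
print(m_corr[1])  # hidden DoF dragged toward the demonstrated correlation
print(m_ind[1])   # 0.0: without cross-covariance nothing propagates
```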