lrozo / CollaborativeTransportation2D

The MATLAB code shows a simple collaborative transportation task carried out on a planar workspace. The models, algorithms and results given in this code are linked to the T-RO paper "Learning Physical Robot Collaborative Behaviors from Human Demonstrations"

What data to collect and in what frames #2

Open omi9y opened 4 years ago

omi9y commented 4 years ago

Hi, I'm trying to replicate this work in the PyBullet simulator. I have built the required environment, but the problem I'm facing is knowing what data I need to collect during kinesthetic teaching. It would be very helpful if you could provide a bit of documentation, e.g. what position_id is, and what the expression s(n).p(m).A \ (s(n).Data - repmat(s(n).p(m).b,1,s(n).nbData)) represents.

lrozo commented 4 years ago

Hi @omi9y !

Thank you for your message! It is nice to see that you are trying to use this approach. Concerning your questions, the usual process is as follows:

  1. You record kinesthetic demonstrations (recording the end-effector position-orientation).
  2. Then, according to which task parameters your task has (e.g. positions of objects in the workspace), you project the recorded data into these frames (task parameters) using the affine transformation s(n).p(m).A \ (s(n).Data - repmat(s(n).p(m).b,1,s(n).nbData)). The variable position_id just holds the indices of the position data in my dataset; this is specific to my code.
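Since the replication is being done in PyBullet, here is a minimal NumPy sketch of what that MATLAB expression computes. The function and variable names are illustrative assumptions, not identifiers from the repository: A is the frame's linear part (e.g. a rotation), b its origin, and the data matrix holds one column per time step.

```python
import numpy as np

def project_to_frame(A, b, data):
    """Express globally recorded data in the local frame (A, b).

    Equivalent to MATLAB's  A \ (data - repmat(b, 1, nbData)):
    subtract the frame origin from every column, then solve A x = (data - b).
    """
    return np.linalg.solve(A, data - b.reshape(-1, 1))

# 2D toy example: a frame rotated by 90 degrees with origin at (1, 2)
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, 2.0])
data = np.array([[1.0, 2.0],   # x positions over 2 time steps
                 [3.0, 2.0]])  # y positions
local = project_to_frame(A, b, data)
```

Because A is a rotation here, the solve is just applying the inverse rotation after shifting by the frame origin; for a general invertible A, np.linalg.solve matches MATLAB's backslash.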

Hope this helps!

omi9y commented 4 years ago

Hi lrozo, first of all, thanks for such a quick reply. I think I did not phrase my question properly, but your reply still answered it partially. I want to understand the data inside the Data.mat file. When I load it, I get s, a 1x8 struct array with the fields p, DataP and DataF. My queries are:

  1. what data these variable stores and how we are logging this value from robot. As far as my knowledge I understood that the first value of variable p stores the target position and orientation w r t robot base i.e frame 1 but what are other two values in variable p.

  2. Also, DataP has 6 values in each column; are these joint angles recorded while traversing the trajectory? Going through the code, it seems only 2 values represent position and the other 4 represent velocity and acceleration. So what are these values? The paper mentions a 7-DoF robot, so they are definitely not joint angles.

  3. And lastly, DataF represents force values in the code, but the paper mentions a 6-axis force-torque sensor, while here only two values are present in each column. Did you use a dimensionality-reduction technique, or something else?

Could you please guide me on each of these points, so that I can collect my own data from a UR10 robot? Thanks!

lrozo commented 4 years ago

First of all, you need to take into account that the example provided is a toy scenario in 2D, which means the data do not correspond to recordings from a real robot. Below I answer your questions:

  1. DataP stores position, velocity and acceleration in 2D, that is why each instance is a 6-element vector. This does not correspond to robot joints.
  2. DataP would represent, in a very simplified way, the position of the robot end-effector. However, let me insist that this is a toy example and therefore the data do not come from a real robot in this case.
  3. DataF stores forces, in this case along the x and y axes, as it is a 2D example. In the more general case of a real robot, this may contain a 3D vector for forces and another 3D vector for torques (read, e.g., from a 6-axis sensor).

Best,

Leonel

omi9y commented 4 years ago

Hi Leonel,

Sorry to disturb you again, but I have one more query: is this code generalized to 3D position, velocity and acceleration data? I'm struggling when using 3D data. Also, it would be very helpful if you could provide a link or resource on the affine-transformation projection technique you used.

Thanks, Omprakash

lrozo commented 4 years ago

Hi,

  1. Yes, the code can work with 3D data. Just be careful with the dimensionality of the vectors and matrices in the code; some of them may be written specifically for 2D, but this is just a matter of changing the dimensionality to "3".
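One place where that dimensionality change shows up is in the task-parameter matrices. A common convention in task-parameterized models (and an assumption here, not code from this repository) is that when the state stacks position, velocity and acceleration, the frame's D x D rotation R is replicated block-diagonally for each derivative, while the offset b shifts only the position block. A hedged NumPy sketch:

```python
import numpy as np

def make_frame(R, origin):
    """Build (A, b) for a state stacking position, velocity, acceleration.

    R: (D, D) rotation of the frame, origin: (D,) position of the frame.
    Works for D = 2 or D = 3 without further changes.
    """
    D = R.shape[0]
    A = np.kron(np.eye(3), R)                      # block-diag: R for pos, vel, acc
    b = np.concatenate([origin, np.zeros(2 * D)])  # offset only on position
    return A, b

# 3D case: identity rotation, frame origin at (0.5, -0.2, 1.0)
R = np.eye(3)
A, b = make_frame(R, np.array([0.5, -0.2, 1.0]))
```

Writing the construction in terms of D like this is what makes moving from 2D to 3D "just a matter of changing the dimensionality".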

  2. The affine transformation is mentioned in the paper that is cited in the reference of the code (5th page): https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7450630

Best.

omi9y commented 4 years ago

Thank you.