HaddingtonDynamics / Dexter

GNU General Public License v3.0

Moving the robot through a series of cartesian waypoints to follow the trajectory with an overall cartesian trapezoidal velocity profile. #114

Open JamesNewton opened 2 years ago

JamesNewton commented 2 years ago

Moving the robot through a series of cartesian waypoints to follow the trajectory with an overall cartesian trapezoidal velocity profile.

The key is the generation of ideally spaced (or timed) interim points from the gross waypoints and velocity. It is my understanding that ROS provides this functionality via the MoveIt package, though I am not a ROS expert. I'm not aware of a function outside ROS that generates these ideally spaced interim points from a set of waypoints with velocity ramping in Cartesian space. This is not provided on the robot or in DDE.

The best model for this which I have seen outside ROS is PhUI2RCP which records points during follow mode and plays them back from a recorded array. The specific routine is replayPointsitr https://github.com/HaddingtonDynamics/Dexter/blob/3fd92e25c66536585abd3cdd65da8a5fb88d8ed3/Firmware/dde_apps/PHUI2RCP.js#L320

The limitation of this is that the points are already broken up into a series of very close interim positions, rather than being distant waypoints, because they are recorded during the movement of the robot arm by a human. These positions are recorded in joint space, not Cartesian space, and their spacing reflects the joint velocity at the time they were recorded.

In our DDE IDE program, there is a routine called Kin.move_to_straight https://github.com/cfry/dde/blob/16c6f13ee091351cc8973c088f3d49d4ffb270f2/math/Kin.js#L1080 which takes a starting and ending XYZ position and orientation and breaks those up into a series of small goals. The limitation of this routine is that it does not apply any velocity ramping: the interim goals are equidistant, when their spacing should vary with velocity, given that they will be set as the goal on equal time divisions. That is, assuming each interim goal is sent to the robot at a constant pace, the points near the start and end of the move should be closer together and the points near the middle should be farther apart.

I would suggest looking to the ROS documentation for a means of generating these interim points.

Once you have this ideal set of interim cartesian points, you can send 'P' oplets to drive the joints smoothly through that path, just as is done in PhUI2RCP. You will find that the arm will lag slightly when the joint velocity increases, but you can (somewhat) compensate for that by gain scheduling the PID_P term: Increase the PID drive when the joint is moving a large distance between two points (to avoid lag), and decrease it when the distance is small (to avoid oscillations).

@ramamoorthyluxman @sflc2 @ellisrgrz

HerringTin commented 2 years ago

I'm attempting to integrate Dexter with Moveit currently, but am newish to ROS and Moveit (don't take what I'm saying as gospel).

I'm working towards a basic implementation that uses the generic FollowJointTrajectory controller by making a Dexter hardware interface like this. It provides joint positions inside a control loop for making a hardware interface. At the moment I'm using 'a' oplets, but I suppose 'P' would work too. Outside of the hardware interface I'm more or less following this part of the MoveIt tutorials.

The only issue is that the generic controller tries to do its own PID control using feedback from the robot. I can get around this by passing the commanded positions through to the feedback, but then you won't be able to watch the robot in real time in RViz. I think a proper solution would be to re-implement the FollowJointTrajectory controller.

JamesNewton commented 2 years ago

@HerringTin Our attempts to work with ROS have met with horror and loathing. I'll add a few thoughts about a non-ROS solution:

  1. The key is to write a function which divides the waypoints up into a series of little interim points. At first, you can just use equal spacing. Then later, you can try to apply velocity shaping to make the distance between the points change as you speed up and slow down.

  2. Once you have that function, then converting from XYZ to joint space on each of those points, and sending P moves will get you a (very) basic motion.

  3. Next you can tune that by scheduling PID gain according to joint angle change: larger changes get higher gain, smaller changes get lower gain. This will help to further smooth out the motion while also keeping the robot from lagging behind the waypoints.

So... for number 1: to divide up each straight line move into little interim points, we need to know the distance between the points. The move has a starting X1 Y1 Z1 and an ending X2 Y2 Z2, so it's a 3D vector. We can characterize it by finding the deltas: dX = X2 - X1 and so on. And we should find its rotation from the vertical plane about the Z axis (azimuth) and its rotation up off the horizontal plane about the Y axis (elevation).
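The deltas and those two angles can be sketched in plain JavaScript (this is illustrative, not part of DDE's Vector class; the names are my own):

```javascript
// Deltas, length, and direction angles of the 3D vector from p1 to p2,
// where each point is an [X, Y, Z] array.
function moveVector(p1, p2) {
    const dX = p2[0] - p1[0]
    const dY = p2[1] - p1[1]
    const dZ = p2[2] - p1[2]
    const length = Math.hypot(dX, dY, dZ)
    const azimuth = Math.atan2(dY, dX)                    // rotation about the Z axis
    const elevation = Math.atan2(dZ, Math.hypot(dX, dY))  // angle up off the horizontal
    return {dX, dY, dZ, length, azimuth, elevation}
}
```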

In DDE, evaluate Vector.distance. If you pass in two arrays, each with three numbers which are the X, Y, Z of the point, then it returns the distance between them. That appears to boil down to a call to Math.hypot with three values, which are the deltas in each dimension (e.g. X2-X1, Y2-Y1, etc...). Turns out the distance between a pair of 3D points is just good old Pythagoras but with 3 lengths. Who knew? ,o)
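In plain JavaScript that description boils down to a one-liner (equivalent in spirit to Vector.distance, not the DDE source itself):

```javascript
// 3D distance between two [X, Y, Z] arrays: Pythagoras with three lengths.
const distance3D = (p1, p2) =>
    Math.hypot(p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2])
```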

To start, just divide that distance into equal segments. Velocity ramping is then just three variables, acc, vel, and pos, with vel += acc * dt and pos += vel * dt on each time around. In either case, we get a fraction of that original vector hypotenuse, and we need to convert pos back into XYZ.

So to do that, we can scale each delta by the fraction traveled: X = X1 + dX * (pos / distance), and likewise for Y and Z.
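Putting the pieces together, the interim-point generator for one straight-line move might be sketched like this (the function, its parameter names, and the unit choices are mine, not DDE's; it integrates a trapezoidal velocity profile and emits one XYZ point per time step):

```javascript
// Generate interim XYZ points from p1 to p2 with a trapezoidal velocity
// profile: ramp up at accel, cruise at maxVel, ramp down to stop at p2.
// dt is the fixed time between emitted points; distance units are arbitrary
// as long as maxVel and accel use the same units per second.
function interimPoints(p1, p2, maxVel, accel, dt) {
    const dX = p2[0] - p1[0], dY = p2[1] - p1[1], dZ = p2[2] - p1[2]
    const total = Math.hypot(dX, dY, dZ)
    const points = []
    let pos = 0, vel = 0
    while (pos < total) {
        // Ramp down when the remaining distance equals the stopping distance
        // at the current velocity; otherwise ramp up toward maxVel.
        const stopDist = (vel * vel) / (2 * accel)
        if (total - pos <= stopDist) vel = Math.max(vel - accel * dt, accel * dt)
        else vel = Math.min(vel + accel * dt, maxVel)
        pos = Math.min(pos + vel * dt, total)
        const f = pos / total  // fraction of the way along the vector
        points.push([p1[0] + dX * f, p1[1] + dY * f, p1[2] + dZ * f])
    }
    return points
}
```

Note the points come out closer together near the start and end and farther apart in the middle, which is exactly the spacing the equal-time-division argument above calls for.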

Number 2 is a solved problem. DDE has a version of it that doesn't take into account the DH calibration, but it's good enough for now. It's in the Kin class, and is available in a few different interfaces, e.g. inverse_kinematics(xyz, direction = [0, 0, -1], config = [1, 1, 1], dexter_inst_or_workspace_pose = Vector.make_pose()) but you should look through all the functions in that class. (see the wiki page on Kinematics for terms).

Number 3 is a new thing. The PID control system obeys max speed, but the rate of movement is mostly proportional (who would have guessed) to the error. So it's common for the arm to actually lag the goal, and round off corners, if you don't stay there long enough. I'm of the opinion that we can "schedule" the PID gain dynamically and avoid a lot of that. The PID gain multiplies the positional error to generate corrective motion.

These PID gains are normally set low (e.g. 0.2) because if they are set too high, the joint will oscillate around the goal point. A low gain gets there eventually and avoids that oscillation, but it lags. A high gain gets there right now and stays tight on the goal, but will oscillate if it overtakes the goal. Scheduling those gains high when the goal point is going to move anyway (before the joint has a chance to overtake it), and then dropping them back low when we start moving (for a gentle start) and when we come in for a landing at the end, may just be the trick.
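As a sketch of the scheduling itself: map how far the joint must move this segment onto a gain between a safe low and a tight high. The function name, the gain limits (0.2 and 1.0), and the full-scale distance are made-up illustration numbers, and the actual oplet for pushing a new gain to the robot is not shown; tune all of this on the real arm.

```javascript
// Pick a PID P gain per segment, proportional to the joint's commanded
// move distance (here in arcseconds), clamped to [lowGain, highGain].
// Big moves get high gain (less lag); small moves get low gain (no oscillation).
function scheduleGain(jointDeltaArcsec, lowGain = 0.2, highGain = 1.0, fullScale = 3600) {
    const f = Math.min(Math.abs(jointDeltaArcsec) / fullScale, 1)  // 0..1
    return lowGain + (highGain - lowGain) * f
}
```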

Someone with programming skill and more time than I have should try this.

HerringTin commented 2 years ago

My goal is actually full Moveit integration rather than this movement specifically. Maybe I should open a new issue for that?

I've made some progress and have a working but super basic JointTrajectoryController with no feedback. I'm trying to read joint positions with the 'r' oplet and the #measured_angles keyword. Is there perhaps an existing C++ struct to hold the response TCP packet's fields, and a function to parse the packet into said struct? I'm guessing there's something in the DDE source but I haven't found it yet.

JamesNewton commented 2 years ago

> My goal is actually full Moveit integration rather than this movement specifically. Maybe I should open a new issue for that?

Please do! It's a worthy action and we would love for you to document your progress and get answers related to it on your own issue thread.

> I've made some progress and have a working but super basic JointTrajectoryController with no feedback. I'm trying to read joint positions with the 'r' oplet and the #measured_angles keyword. Is there perhaps an existing C++ struct to hold the response TCP packet's fields, and a function to parse the packet into said struct? I'm guessing there's something in the DDE source but I haven't found it yet.

The structure is too simple to worry about: it's just an array of 60 32-bit integers, nothing more. The mapping of those is shown on: https://github.com/HaddingtonDynamics/Dexter/wiki/status-data If you look into DexRun.c you will find that the order and values returned are defined in the array StatusReportIndirection, so you can update that to get back whatever you want.
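For what it's worth, parsing it takes only a few lines. Here is a sketch in Node.js (a C++ version would just memcpy the buffer into an int32_t[60]); this assumes the ints arrive little-endian, which matches the ARM host, so swap in readInt32BE if that turns out wrong:

```javascript
// Parse the status packet: a flat array of 60 little-endian 32-bit ints.
// See the status-data wiki page for what each index means.
function parseStatus(buf) {
    const words = new Array(60)
    for (let i = 0; i < 60; i++) words[i] = buf.readInt32LE(i * 4)
    return words
}
```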

The function ProcessServerSendDataDDE fills the returned char buffer, char *sendBuff, with those ints by pointing an int *sendBuffReTyped at the same memory in the line sendBuffReTyped=(int *)sendBuff;

The values are actually pumped in via this code:

        for(i=0;i<59;i++)
        {
            sendBuffReTyped[i+6]=getNormalizedInput(StatusReportIndirection[i]);
        }
        return TRUE;

https://github.com/HaddingtonDynamics/Dexter/blob/Stable_2020_02_04_ConeDrive/Firmware/DexRun.c#L2975 and getNormalizedInput processes the raw data from the FPGA into more meaningful units. E.g. from stepper position count into arcseconds via the jointscal array.

So you can easily write your own C++ code, via a union or just by pointing a uint32_t array at the returned buffer, and you can easily modify DexRun.c to stuff that array with any values you like... including the Stepped Angles if you want: BASE_STEPS, PIVOT_STEPS, etc...