Hand-in: Apr. 2, 2021, 18:00 CEST
In this assignment, we implement a kinematic walking controller for a legged robot!
Let's see the figure below.
Figure 1: The control pipeline of kinematic walking controller: 1) trajectory planning 2) computing desired joint angles by IK.
We start from five trajectories: one for the base and four for the feet. We plan these trajectories given a target velocity of the robot's base (body) and a timeline of the foot contacts (i.e. when, and for how long, each foot is in contact with the ground). The details of how we plan the timeline of foot contacts, and how we generate the target trajectories, are out of the scope of this assignment. But in Ex.3, we will have a sneak peek at the trajectory planning procedure for the robot's base.
Our ultimate goal is to track all of the target trajectories at the same time. We simplify this problem by assuming the robot's base (somehow...) always perfectly tracks the target base trajectory. Then, we want to find desired joint angles for each leg that allow the foot (i.e. the end effector of the individual leg) to reach its target position. We can effectively formulate this as an IK problem. Since the robot has four legs, by solving four IK problems, we obtain the desired configuration of the robot.
Figure 2: Don't freak out! This is a skeletal visualization of our Dogbot. Assuming the base is at its target position, we want to find desired joint angles for each leg that allow the foot (i.e. the end effector of the individual leg) to reach its target position. Note: you can render the skeletal view via Main Menu > Draw options > Draw Skeleton.
Once you complete this assignment you should hand in
The grading scheme is as follows
Please leave your questions on GitHub, so your colleagues can also join our discussions.
Okay now let's do this step-by-step :)
In order to formulate an IK problem, we first have to express the position of each foot as a function of the joint angles (and the base position). Formally speaking, given a generalized coordinates vector q,
which is a concatenation of the robot's base position, the robot's base orientation, and the nj joint angles, we need to find a map between the vector q and the end effector position p_EE expressed in the world coordinate frame.
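Written out, and assuming a three-angle (e.g. Euler-angle) parameterization of the base orientation, this vector reads:

$$
\mathbf{q} = \big[\, p_x,\; p_y,\; p_z,\; \theta_1,\; \theta_2,\; \theta_3,\; q_1,\; \dots,\; q_{n_j} \,\big]^\top \in \mathbb{R}^{6 + n_j}
$$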
In the previous lecture, we learned how to find this map by forward kinematics.
Code:
src/libs/simAndControl/robot/GeneralizedCoordinatesRobotRepresenetation.cpp
P3D getWorldCoordinates(const P3D &p, RB *rb)
Task: Implement the `P3D getWorldCoordinates(const P3D &p, RB *rb)` function (forward kinematics of a point attached to rigid body `rb`).
Details:
- `GeneralizedCoordinatesRobotRepresenetation` represents the generalized coordinate vector q.
- `P3D getWorldCoordinates(const P3D &p, RB *rb)` returns the position of a point in the world coordinate frame. The argument `p` is the position of the point expressed in rigid body `rb`'s coordinate frame.
- Use `getCoordsInParentQIdxFrameAfterRotation(int qIndex, const P3D &pLocal)` first: this function returns the position of a point in the coordinate frame of the parent of `qIdx` after the DOF rotation has been applied.

Once you implement `getWorldCoordinates` correctly, you will see green transparent spheres around the feet of the robot.
Figure 3: Check if your implementation is correct. You should see the green spheres around the robot's feet.
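If the recursion is hard to picture, here is a minimal, standalone sketch of the idea in plain Eigen (these are not the codebase types, and the base orientation is ignored for simplicity): a point given in a link's frame is repeatedly rotated by each DOF and re-expressed in the parent's frame until the base is reached. In `getWorldCoordinates`, `getCoordsInParentQIdxFrameAfterRotation` plays the role of this per-DOF step.

```cpp
// Standalone illustration of the forward-kinematics recursion (NOT the codebase API).
#include <Eigen/Dense>
#include <iostream>
#include <vector>

struct Dof {
    Eigen::Vector3d axis;    // rotation axis, expressed in the parent's frame
    Eigen::Vector3d offset;  // origin of this DOF's frame, expressed in the parent's frame
    double angle;            // current joint angle (one entry of q)
    int parent;              // index of the parent DOF, -1 if the parent is the base
};

// Map a point p expressed in the frame of DOF i to world coordinates by
// walking up the kinematic chain until we reach the base.
Eigen::Vector3d pointToWorld(const std::vector<Dof>& chain, const Eigen::Vector3d& basePos,
                             int i, Eigen::Vector3d p) {
    while (i >= 0) {
        const Dof& d = chain[i];
        // Apply this DOF's rotation, then express the result in the parent's frame.
        p = Eigen::AngleAxisd(d.angle, d.axis.normalized()) * p + d.offset;
        i = d.parent;
    }
    // Base orientation is ignored in this toy example; only the base position is added.
    return p + basePos;
}

int main() {
    // A two-DOF planar "leg": a hip and, 0.2 m below it, a knee.
    std::vector<Dof> chain;
    chain.push_back({Eigen::Vector3d(0, 0, 1), Eigen::Vector3d(0, 0, 0), 0.3, -1});     // hip
    chain.push_back({Eigen::Vector3d(0, 0, 1), Eigen::Vector3d(0, -0.2, 0), -0.6, 0});  // knee
    Eigen::Vector3d footInKneeFrame(0, -0.2, 0);
    std::cout << pointToWorld(chain, Eigen::Vector3d(0, 0.5, 0), 1, footInKneeFrame).transpose()
              << std::endl;
    return 0;
}
```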
Okay, now we can express the position of the feet as a function of the joint angles. It's time to formulate an IK problem: we want to find a generalized coordinate vector q_desired given an end effector target position p_EE^target.
In the last assignment, we learned how to formulate the inverse kinematics problem as an optimization problem.
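A standard way to write it, consistent with the gradient-based methods mentioned below, is the least-squares problem

$$
\mathbf{q}^{desired} = \arg\min_{\mathbf{q}} \; \tfrac{1}{2}\,\big\lVert \mathbf{p}_{EE}(\mathbf{q}) - \mathbf{p}_{EE}^{target} \big\rVert^2 .
$$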
We can solve this problem with the gradient-descent method, Newton's method, or the Gauss-Newton method. Whichever optimization method you choose, we need the Jacobian matrix of the foot point. Remember, the Jacobian is the matrix of a vector-valued function's first-order partial derivatives.
For now, we will use finite differences (FD) to compute the Jacobian. The idea of finite differences is simple: apply a small perturbation h to the j-th component of q, and compute the (i, j) entry of the Jacobian as follows.
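For example, with a central difference (a one-sided forward difference also works and saves one evaluation, at the cost of lower accuracy):

$$
J_{ij} = \frac{\partial p_i}{\partial q_j} \approx \frac{p_i(\mathbf{q} + h\,\mathbf{e}_j) - p_i(\mathbf{q} - h\,\mathbf{e}_j)}{2h},
$$

where $\mathbf{e}_j$ is the j-th unit vector and $h$ is a small step size (e.g. $10^{-4}$).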
Code:
src/libs/simAndControl/robot/GeneralizedCoordinatesRobotRepresenetation.cpp
void estimate_linear_jacobian(const P3D &p, RB *rb, Matrix &dpdq)
Task: Implement the `estimate_linear_jacobian` function, which computes a Jacobian matrix of a position/vector by FD.
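If it helps, here is a minimal standalone version of the same idea, written against an abstract position function in plain Eigen; the codebase version instead fills the `Matrix dpdq` output argument for a point `p` attached to rigid body `rb`.

```cpp
// Finite-difference estimate of the 3 x n Jacobian of a point position p(q).
// Standalone sketch, not the codebase implementation.
#include <Eigen/Dense>
#include <functional>

Eigen::MatrixXd estimateJacobianFD(
    const std::function<Eigen::Vector3d(const Eigen::VectorXd&)>& p,
    const Eigen::VectorXd& q, double h = 1e-5) {
    Eigen::MatrixXd J(3, q.size());
    for (int j = 0; j < q.size(); ++j) {
        Eigen::VectorXd qp = q, qm = q;
        qp[j] += h;                              // perturb the j-th coordinate up ...
        qm[j] -= h;                              // ... and down
        J.col(j) = (p(qp) - p(qm)) / (2.0 * h);  // central-difference column
    }
    return J;
}
```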
Now, it's time to implement an IK solver. Choose one of the optimization methods we learned in the previous lecture: gradient descent, Newton's method, or Gauss-Newton.
We solve four independent IK problems (one for each leg). Let's say q_desired,i is the solution for the i-th foot.
Since each leg's IK only changes that leg's joint angles, we can solve the four problems one by one and sum up the individual solutions to get the full desired generalized coordinate vector q_desired.
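For instance, if you go with Gauss-Newton, one possible update loop looks like the sketch below. This is a generic, standalone version that operates on an abstract position function and Jacobian, not the codebase's solver interface; the small damping term `lambda` is an extra regularizer I add for numerical robustness, and plain Gauss-Newton or gradient descent are equally valid choices.

```cpp
// One possible IK loop: damped Gauss-Newton on 1/2 ||p(q) - p_target||^2.
// Sketch only; the codebase solver works on the full generalized coordinates.
#include <Eigen/Dense>
#include <functional>

Eigen::VectorXd solveIK(
    const std::function<Eigen::Vector3d(const Eigen::VectorXd&)>& p,
    const std::function<Eigen::MatrixXd(const Eigen::VectorXd&)>& jacobian,
    Eigen::VectorXd q, const Eigen::Vector3d& pTarget, int nSteps = 10) {
    const double lambda = 1e-6;  // small damping for a well-conditioned solve
    for (int k = 0; k < nSteps; ++k) {
        Eigen::Vector3d residual = pTarget - p(q);  // task-space error
        Eigen::MatrixXd J = jacobian(q);            // 3 x n Jacobian
        // Gauss-Newton step: (J^T J + lambda I) dq = J^T residual
        Eigen::MatrixXd H = J.transpose() * J
                          + lambda * Eigen::MatrixXd::Identity(q.size(), q.size());
        Eigen::VectorXd dq = H.ldlt().solve(J.transpose() * residual);
        q += dq;
    }
    return q;
}
```

In the actual solver, `p(q)` corresponds to `getWorldCoordinates` evaluated at the foot point and the Jacobian to `gcrr.estimate_linear_jacobian`; you would run such an update for each leg and combine the four per-leg results into q_desired.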
Code:
src/libs/kinematics/IK_solver.h
void solve(int nSteps = 10)
Task: Implement the `void solve(int nSteps = 10)` function of the IK solver.
Details: Use the `gcrr.estimate_linear_jacobian(p, rb, dpdq)` function we implemented for Ex. 2-1.

Let's see how the robot moves. Run the `locomotion` app and press the Play button (or just tap the SPACE key on your keyboard). Do you see the robot trotting in place? Then you are on the right track!
Now, let's give it some velocity commands. Press the ARROW keys on your keyboard: you can increase/decrease the target forward speed with the up/down keys, and increase/decrease the target turning speed with the left/right keys. You can also change the target speed in the main menu.
Oops! The base of the robot is not moving at all! Well, a robot trotting in place is already adorable enough, but this is not what we really want. We want to make the robot follow our input command.
Let's see what is happening here. Although I'm giving a 0.3 m/s forward speed command, the target trajectories (red for the base, white for the feet) are not updated accordingly. With a correct implementation, the trajectories should look like this:
Code:
src/libs/simAndControl/locomotion/LocomotionPlannerHelpers.h
bFrameReferenceMotionPlan
void generate(const bFrameState& startingbFrameState)
Task: Implement the `generate` function so that it updates the future base positions and orientations according to our forward, sideways, and turning speed commands.
Once you finish this step, you can now control the robot with your keyboard.
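Conceptually, the update boils down to forward-integrating the commanded velocities over the planning horizon. Below is a standalone sketch under assumed conventions (y-up world, heading angle about the +y axis, forward along the body's +z axis, with `dt` the planning time step and `nSteps` the horizon length); the actual `bFrameReferenceMotionPlan` uses the codebase's own state types and conventions.

```cpp
// Forward-integrate speed commands into future base positions and headings.
// Standalone sketch, not the bFrameReferenceMotionPlan interface.
#include <cmath>
#include <vector>

struct BaseTarget {
    double x = 0, z = 0;  // base position on the ground plane (y is up)
    double heading = 0;   // yaw angle about the +y axis
};

std::vector<BaseTarget> planBaseTargets(BaseTarget s, double forwardSpeed,
                                        double sidewaysSpeed, double turningSpeed,
                                        double dt, int nSteps) {
    std::vector<BaseTarget> plan;
    for (int i = 0; i < nSteps; ++i) {
        // Rotate the commanded body-frame velocity into world coordinates ...
        double c = std::cos(s.heading), sn = std::sin(s.heading);
        double vxWorld = c * sidewaysSpeed + sn * forwardSpeed;
        double vzWorld = -sn * sidewaysSpeed + c * forwardSpeed;
        // ... then integrate position and heading over one planning time step.
        s.x += dt * vxWorld;
        s.z += dt * vzWorld;
        s.heading += dt * turningSpeed;
        plan.push_back(s);
    }
    return plan;
}
```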
By the way, planning the feet trajectories is a bit more tricky. I have already implemented a feet trajectory planning strategy in our code base, so that once you complete the `generate` function, the feet trajectories are also updated by user commands.
I will not explain more details today, but if you are interested, please read the paper: Marc H. Raibert et al., Experiments in Balance with a 3D One-Legged Hopping Machine, 1984. Although this is a very simple and long-standing strategy, almost every state-of-the-art legged robot still uses this simple heuristic. (Side note: Marc Raibert, who led the Leg Laboratory at MIT, later founded Boston Dynamics in 1992.)
From now on, we will improve our kinematic walking controller.
Okay, so far we compute the Jacobian matrix with FD. But in general, FD is not only inaccurate but also very slow in terms of computation speed. Can we compute the Jacobian matrix analytically? The answer is yes. With a small extra effort, we can derive the analytic Jacobian matrix and implement it in our code.
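One way to derive it is via the standard geometric-Jacobian result for rotational DOFs (I state it here only as a pointer; make sure it matches the conventions in the codebase): if q_j is a rotational DOF whose axis, expressed in world coordinates, is a_j and passes through the world-frame point x_j, and p is a point on a body downstream of that DOF, then

$$
\frac{\partial \mathbf{p}}{\partial q_j} = \mathbf{a}_j \times (\mathbf{p} - \mathbf{x}_j),
$$

while the column is zero whenever q_j does not affect p, and the three columns corresponding to the base translation are simply the 3x3 identity.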
Code:
src/libs/simAndControl/robot/GeneralizedCoordinatesRobotRepresenetation.cpp
void compute_dpdq(const P3D &p, RB *rb, Matrix &dpdq)
Test: Compile and run `src/test-a2/test.cpp`. Test 4 should pass.
Our robot can go anywhere in a flat-earth world. But, you know, our world is not flat at all. Now, we will make our robot walk on a bumpy terrain. Imagine you have a height map which gives you the height of the ground at given (x, z) coordinates (note that we use the y-up axis convention, i.e. y is the height of the ground). To make the robot walk on this terrain, the easiest way is to add an offset to the y coordinate of each target position.
You have full freedom to choose the terrain map you want to use: you can create a bumpy terrain by adding some spheres to the scene, as I've done here, or you can download a landscape mesh file in .obj format. Please figure out the best strategy to implement this on your own.
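One simple way to realize the offset is sketched below, using a hypothetical `terrainHeight(x, z)` lookup that you would provide yourself (it is not part of the codebase): every planned target gets the ground height at its own (x, z) location instead of one shared constant.

```cpp
// Add a per-point ground-height offset to planned targets (y is up).
#include <Eigen/Dense>
#include <cmath>

// Hypothetical height map: a single smooth bump centered at the origin.
// Replace with your own terrain (spheres in the scene, a loaded .obj mesh, ...).
double terrainHeight(double x, double z) {
    return 0.1 * std::exp(-(x * x + z * z) / 0.5);
}

// Shift a planned target (base or foot) by the local ground height,
// instead of adding one global groundHeight offset to every trajectory.
Eigen::Vector3d offsetTargetByTerrain(Eigen::Vector3d target) {
    target.y() += terrainHeight(target.x(), target.z());
    return target;
}
```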
Task: Make the robot walk over a bumpy terrain of your own design, demonstrated in the `locomotion` app (5%).
Hint: See the line `double groundHeight = 0;` in `SimpleLocomotionTrajectoryPlanner`. This will merely give the same offset to every target trajectory; we want to give a different offset to each individual foot and to the base.

Congratulations! You can now control the legged robot! Hooray!
Can we use our kinematic walking controller on a real robot? Well... unfortunately, it's not that easy. In fact, working with a real robot is a completely different story, because we need to take into account the dynamics of the robot. But, you know what? We have implemented the fundamental building blocks of legged locomotion control. We can extend this idea to control a real robot someday!
By the way, can you guess why using a kinematic controller on a real legged robot doesn't work well in practice? If you are interested, please leave your ideas on the GitHub issue.