AutonomyLab / ardrone_autonomy

ROS driver for Parrot AR-Drone 1.0 and 2.0 quadrocopters
http://wiki.ros.org/ardrone_autonomy
BSD 3-Clause "New" or "Revised" License

PID gains parameters #187

Closed pulver22 closed 8 years ago

pulver22 commented 8 years ago

Hi, I'm working on marker tracking and I'm implementing a PID controller to let the drone reach a desired position in front of the detected marker.

My problem is that I don't know how to choose the proper K_p, K_i and K_d parameters. Has any of you implemented a position controller? Could you tell me which parameters you used, or how to choose the best ones?

Thank you!

kbogert commented 8 years ago

PID control with the drones is not trivial, but there are a couple of implementations I'm aware of. For horizontal movement you need to convert a desired velocity into roll and pitch commands; vertical and yaw movement is instead commanded as a velocity directly.

You can see this issue in play in the tum_simulator controller, which uses a PID to first convert velocity into roll and pitch then uses another PID to control roll and pitch of the drone.

The code is at: https://github.com/tum-vision/tum_simulator/blob/master/cvg_sim_gazebo_plugins/src/quadrotor_simple_controller.cpp#L318 with parameters: https://github.com/tum-vision/tum_simulator/blob/master/cvg_sim_gazebo_plugins/urdf/quadrotor_simple_controller.urdf.xacro

ardrone_autonomy's cmd_vel then takes the output roll and pitch in linear.x and linear.y, see: https://github.com/AutonomyLab/ardrone_autonomy/issues/116

The simplest PID I've seen is in tum_ardrone, which ignores this problem completely and suffers for it: https://github.com/tum-vision/tum_ardrone/blob/hydro-devel/src/autopilot/DroneController.cpp#L129 with parameters: https://github.com/tum-vision/tum_ardrone/blob/hydro-devel/cfg/AutopilotParams.cfg

A more advanced way of dealing with this issue is to explicitly model the connection between roll and pitch and the acceleration of the drone. My implementation is based on a damped spring: we attach a virtual spring between the drone and its target position, then roll, pitch, yaw, and vertical velocity are chosen to mimic the expected linear movement as closely as possible. https://github.com/thinclab/uga_tum_ardrone/blob/indigo-devel/src/autopilot/DroneController.cpp#L148 with parameters: https://github.com/thinclab/uga_tum_ardrone/blob/indigo-devel/cfg/AutopilotParams.cfg

Unfortunately, all of these will require tuning for your specific situation, especially if you are only using the accelerometers and gyros on board for state estimation.

pulver22 commented 8 years ago

Hi,

first of all, thank you for the really detailed answer. I implemented a really simple controller, which I attach:

while ( nh.ok() ) {
                geometry_msgs::TransformStamped transformStamped;

                try {
                        transformStamped = tfBuffer.lookupTransform ( "ar_marker","ardrone_base_frontcam",ros::Time ( 0 ) );
                } catch ( tf2::TransformException &ex ) {
                        ROS_WARN ( "%s",ex.what() );
                        ros::Duration ( 1.0 ).sleep();
                        continue;
                }

                // Save the translation and rotational offsets on three axis
                float linear_offset_X, linear_offset_Y, linear_offset_Z;
                float rotational_offset_Z;
                linear_offset_X = transformStamped.transform.translation.x;
                linear_offset_Y = transformStamped.transform.translation.y;
                linear_offset_Z = transformStamped.transform.translation.z;
                rotational_offset_Z = transformStamped.transform.rotation.z;

                // Adjust distance on the global X axis - z for marker and camera
                // Adjust distance on the global y axis - x for marker and camera
                // Adjust distance on the global z axis - y for marker and camera

                // Create the twist message
                geometry_msgs::Twist vel_msg;

                // PID gains
                double kp = 0.001;
                double ki = 0.001;
                double kd = 0.001;

                // Errors
                double linear_error_x = linear_offset_Z;
                double linear_error_y = linear_offset_X;
                double linear_error_z = linear_offset_Y;
                double rotational_error_z = rotational_offset_Z;

                //Time [s]
                static int64 last_t = 0;
                double dt = ( cv::getTickCount() - last_t ) / cv::getTickFrequency();
                last_t = cv::getTickCount();

                // Integral terms
                static double linear_integral_x = 0.0, linear_integral_y = 0.0, linear_integral_z = 0.0, rotational_integral_z = 0.0;
                if ( dt > 0.1 ) {
                        //Reset
                        linear_integral_x = 0.0;
                        linear_integral_y = 0.0;
                        linear_integral_z = 0.0;
                        rotational_integral_z = 0.0;
                }
                linear_integral_x += linear_error_x * dt;
                linear_integral_y += linear_error_y * dt;
                linear_integral_z += linear_error_z * dt;
                rotational_integral_z += rotational_error_z * dt;

                // Derivative terms
                static double linear_previous_error_x = 0.0, linear_previous_error_y = 0.0, linear_previous_error_z = 0.0, rotational_previous_error_z = 0.0;
                if ( dt > 0.1 ) {
                        // Reset
                        linear_previous_error_x = 0.0;
                        linear_previous_error_y = 0.0;
                        linear_previous_error_z = 0.0;
                        rotational_previous_error_z = 0.0;
                }
                double linear_derivative_x = ( linear_error_x - linear_previous_error_x ) / dt;
                double linear_derivative_y = ( linear_error_y - linear_previous_error_y ) / dt;
                double linear_derivative_z = ( linear_error_z - linear_previous_error_z ) / dt;
                double rotational_derivative_z = ( rotational_error_z - rotational_previous_error_z ) / dt;
                linear_previous_error_x = linear_error_x;
                linear_previous_error_y = linear_error_y;
                linear_previous_error_z = linear_error_z;
                rotational_previous_error_z = rotational_error_z;

                cout << "error_x : " << linear_error_x <<", error_y : " << linear_error_y << ", error_z : " << linear_error_z << ", rotational_error : " << rotational_error_z << endl;
                if ( linear_error_x > 0.6 || linear_error_y > 0.1 || linear_error_y < -0.1 || rotational_error_z > 0.1 || rotational_error_z < -0.1 || linear_error_z < -0.2 ) {
                        kp = atof ( argv[1] );
                        ki = atof ( argv[2] );
                        kd = atof ( argv[3] );
                }

                vel_msg.linear.x = kp * linear_error_x + ki * linear_integral_x + kd * linear_derivative_x;
                vel_msg.linear.y = kp * linear_error_y + ki * linear_integral_y + kd * linear_derivative_y;
                vel_msg.linear.z = - ( kp * linear_error_z + ki * linear_integral_z + kd * linear_derivative_z );
                vel_msg.angular.z = - ( kp * rotational_error_z + ki * rotational_integral_z + kd * rotational_derivative_z );

                vel_pub.publish ( vel_msg );

                vel_msg.linear.x = vel_msg.linear.y = 0;
                vel_msg.angular.z = 0.0;

                rate.sleep();
        }

When tuning the parameters, how can I tell whether the system is stable enough or whether I should change them? At the moment I'm doing trial and error in tum_simulator, watching how the drone behaves, but that takes time and it's not a scientific approach.

Can you tell me which variables or information I should look at, or plot, in order to understand the behaviour better?

kbogert commented 8 years ago

1) tum_simulator doesn't correctly simulate ardrone_autonomy, namely it assumes cmd_vel is an actual velocity command. You can use my changes to get a somewhat more accurate simulation of the drone: https://github.com/kbogert/tum_simulator/commit/00dba26f916ff5b6ad3d0f174d5eb96012f83320

2) If you're just using the distance error as input into the PID controller, the parameters from tum_ardrone should be your starting point. From there, logging the navdata and cmd_vel topics will give you a view into how the drone is acting. The drone's control is highly non-linear, so a PID controller used in this way will be a poor approximation; keep your parameters small and you will get a slow-moving but somewhat usable drone. The big problem with this approach is the tradeoff between controlling drift near the goal (due to small error and small parameters) and the overshoot that occurs with larger parameters.

pulver22 commented 8 years ago

I implemented your changes, but now it is practically impossible to pilot the drone inside the simulator. As soon as I publish a message on /cmd_vel it starts drifting in one direction (depending on the message sent) and it is really difficult to stop.

I don't think this is the expected behaviour but I'll wait for your reply.

kbogert commented 8 years ago

This sounds correct. You are now controlling the roll and pitch of the drone and not the velocity. These determine acceleration, and in the absence of friction the drone will continue forever. You can see this with the real drone if you give it a large command followed by a not exactly zero one (all zeros is the indication for the on-board controller to hover). To stop you will need a correctly implemented controller, or else give the exact negative of the commands you sent spaced over equivalent time (which will not work in the real world).

pulver22 commented 8 years ago

Ok perfect, I understood!

I'll try to test the simple PID I implemented after your changes to see how robust it is.

You said before that this position controller is a poor approximation; do you think so because it is based only on the distance offsets, or are there other reasons?

kbogert commented 8 years ago

A PID controller based on distance error is interpretable as velocity control, and a linear one at that. Using the output as acceleration is incorrect: when approaching the goal the command needs to go negative to slow the object down; acceleration is, after all, the derivative of velocity. This can't happen with your PID: it will continue accelerating forward slightly until it reaches the goal. Then the error sign reverses, causing it to accelerate back. As long as you use small parameters, the problem is masked, since air friction will stop the drone eventually.

pulver22 commented 8 years ago

Thank you for the explanation. My background is not in control theory and I've just started looking at these topics.

Since I want to align the drone with a marker, what do you suggest I should do?

kbogert commented 8 years ago

Use multiple PIDs: the first to generate a velocity command from the distance error, the second to generate pitch and roll commands from the velocity error.

Alternatively, install uga_tum_ardrone and give it goal positions based on where you see the marker.

Good luck.

pulver22 commented 8 years ago

@kbogert how can I use uga_tum_ardrone with the simulator?

kbogert commented 8 years ago

Gazebo provides the same interfaces as ardrone_autonomy. At most you will need to remap some topics.

If you want to use the state estimation features, drop some objects into the simulation beforehand.

pulver22 commented 8 years ago

@kbogert I decided to give your work a try, but I don't understand how I can send a target position to the autopilot.

What I'm trying to do is get the tf transform between the marker and the camera frame and then send it by invoking setTarget() on an instance of DroneController. But it's not working; the drone always remains in the same position.

Can you help me understand the code?

kbogert commented 8 years ago

I would say the easiest method would be to publish commands to /uga_tum_ardrone/com . https://github.com/thinclab/uga_tum_ardrone#using-it-1

This package never publishes TF messages, so you can't rely on that for the current position (and it will drift over time if you're not using the state estimation feature). If you send moveByRel commands based on where you currently see the marker you should be alright. You'll probably want to send a clearCommands command first.

Be sure you're using this command interface to takeoff and land the drone rather than using ardrone_autonomy directly. And make sure you set the IP address parameter in the drone_autopilot node.

pulver22 commented 8 years ago

Hi @kbogert, I tried to modify the code but it is still not working. Actually what I'm trying to do is just detect the marker and land.

The actual code is the following:

        ros::init ( argc, argv,"pose_subscriber" );
        ROS_INFO ( "TF subscriber correctly running..." );
        ros::NodeHandle nh;
        tf2_ros::Buffer tfBuffer;
        tf2_ros::TransformListener tfListener ( tfBuffer );
        ros::Rate rate ( 2.0 );
        ros::Publisher cmd_pub = nh.advertise<std_msgs::String> ( "/uga_tum_ardrone/com",10 );
        int count = 1;

        while ( nh.ok() ) {
                geometry_msgs::TransformStamped transformStamped;
                geometry_msgs::Vector3 errors_msg;
                try {
                        transformStamped = tfBuffer.lookupTransform ( "ar_marker","ardrone_base_frontcam",ros::Time ( 0 ) );
                } catch ( tf2::TransformException &ex ) {
                        ROS_WARN ( "%s",ex.what() );
                        ros::Duration ( 1.0 ).sleep();
                        continue;
                }

                std_msgs::String command;
                if ( count ==  1 ) {
                        command.data = "c clearCommands";
                        cmd_pub.publish<> ( command );
                        count ++;
                }

                command.data = "c land";
                cmd_pub.publish<> ( command );

                rate.sleep();
        }

        ros::spin();
        return 0;

Unfortunately it is not working. I checked the topic and the messages are published on it, but for some reason the drone doesn't move.

Do you see something strange in my code?

Thank you for your patience!

EDIT: I tried sending the autoInit command followed by the land one while the robot was already flying, and it seemed to work! I'll run more tests!

kbogert commented 8 years ago

Well, there's no move command for one. There's no takeoff command for two.

Please get some basic maneuvers working before asking for help. Trying a simple takeoff -> move 1 -> land would be a good idea, then you can work on your marker landing code.

pulver22 commented 8 years ago

By moving I meant taking off/landing depending on the string message sent! Anyway, I noticed that by sending the autoInit first I'm able to perform these movements! I'll continue with more complex manoeuvres and let you know!

pulver22 commented 8 years ago

@kbogert I'm doing more tests with the following code:

while ( nh.ok() ) {
                geometry_msgs::TransformStamped transformStamped;
                geometry_msgs::Vector3 errors_msg;
                try {
                        transformStamped = tfBuffer.lookupTransform ( "ar_marker","ardrone_base_frontcam",ros::Time ( 0 ) );

                } catch ( tf2::TransformException &ex ) {
                        ROS_WARN ( "%s",ex.what() );
                        ros::Duration ( 1.0 ).sleep();
                        continue;
                }

                // Save the translation and rotational offsets on three axis
                float linear_offset_X, linear_offset_Y, linear_offset_Z;
                float rotational_offset_Z;

                linear_offset_X = linear_offset_Y = linear_offset_Z = rotational_offset_Z = 0;
                linear_offset_X = - transformStamped.transform.translation.x;
                linear_offset_Y = transformStamped.transform.translation.z;

                clear.data = "c clearCommands";
                autoinit.data = "c autoInit 500 800";
                reference.data = "setReference $POSE$";
                maxControl.data = "setMaxControl 1";
                initialReachDist.data = "setInitialReachDist 0.1";
                stayWithinDist.data = "setStayWithinDist 0.2";
                stayTime.data = "setStayTime 2";

                moveBy.data = "c moveByRel " + boost::lexical_cast<std::string> ( linear_offset_X ) + " " + boost::lexical_cast<std::string> ( linear_offset_Y ) + " " +
                              boost::lexical_cast<std::string> ( linear_offset_Z ) + " " + boost::lexical_cast<std::string> ( rotational_offset_Z );

                if ( count == 1 ) {
                        cmd_pub.publish<> ( clear );
                        cmd_pub.publish<> ( autoinit );
                        cmd_pub.publish<> ( reference );
                        cmd_pub.publish<> ( maxControl );
                        cmd_pub.publish<> ( initialReachDist );
                        cmd_pub.publish<> ( stayWithinDist );
                        cmd_pub.publish<> ( stayTime );
                        cmd_pub.publish<> ( moveBy );
                        ros::Duration ( 1.0 ).sleep();
                        count ++;
                }

                // Note: plain abs() truncates to int, so fabs() is needed for float offsets
                if ( fabs ( linear_offset_X ) < 0.3 && fabs ( linear_offset_Y ) < 0.6 && fabs ( linear_offset_Z ) < 0.3 ) {
                        ROS_INFO ( "Destination reached" );
                        count ++;
                        ros::shutdown();
                }

                rate.sleep();
        }

Ignoring the last if construct for a moment: the drone actually moves in the right direction (on x and y) depending on the offset retrieved by tf. What I noticed is that these offsets don't seem to correspond to the ground truth data.

One cause could be that I defined the dimensions of the marker incorrectly in the Gazebo files.

But I was also wondering if there is a way to continuously feed the tf feedback into the controller loop, so that the drone keeps moving until the offsets on the three axes are reduced. Is it possible to automate sending a new goal on /uga_tum_ardrone/com before the drone actually reaches the previously assigned one? Or is there a way to pause the ROS loop until it reaches the previous goal and then start a new one?

Actually I modified the code in this way, but I have the impression that, even if it seems to work, it's not the right way:

                if ( ! ( fabs ( linear_offset_X ) < 0.1 && fabs ( linear_offset_Y ) < 0.6 && fabs ( linear_offset_Z ) < 0.3 ) ) {
                        cmd_pub.publish<> ( clear );
                        cmd_pub.publish<> ( autoinit );
                        cmd_pub.publish<> ( reference );
                        cmd_pub.publish<> ( maxControl );
                        cmd_pub.publish<> ( initialReachDist );
                        cmd_pub.publish<> ( stayWithinDist );
                        cmd_pub.publish<> ( stayTime );
                        // (the drone takes off also using only the autoInit)
                        cmd_pub.publish<> ( moveBy );
                        ros::Duration ( 1.0 ).sleep();
                } else {
                        ROS_INFO ( "Destination reached" );
                        ros::shutdown();
                }

Hope you understand!

kbogert commented 8 years ago

I would say at this point if you have got control of the drone using the uga_tum_ardrone package then this issue should be closed. For uga_tum_ardrone specific questions please open an issue with that package to provide better documentation and so that we avoid spamming ardrone_autonomy.

mprannoy commented 8 years ago

A PID controller based on distance error is interpretable as velocity control, and a linear one at that. Using the output as acceleration is incorrect: when approaching the goal the command needs to go negative to slow the object down; acceleration is, after all, the derivative of velocity. This can't happen with your PID: it will continue accelerating forward slightly until it reaches the goal. Then the error sign reverses, causing it to accelerate back. As long as you use small parameters, the problem is masked, since air friction will stop the drone eventually.

@kbogert , isn't 'D' part of a PID based only on distance supposed to give the necessary deceleration to make the velocity 0 at the setpoint?

kbogert commented 8 years ago

No, the derivative of distance is velocity, not acceleration. A PID controller based on distance error is going to output zero at the setpoint, which will make the drone angle flat, i.e. it stops accelerating. It will not output a negative value until after it passes the goal and the error becomes negative.

mprannoy commented 8 years ago

Because the distance (error) is decreasing, its derivative would be negative. This negative term, multiplied with an appropriate gain, will provide an overall negative acceleration, which slowly goes to 0, eventually making the vehicle stop at the setpoint.

kbogert commented 8 years ago

There is no acceleration here. The PID is outputting velocity, which is not acceleration. Multiplying D by a large number will cause excessive velocity and overshoot in an ideal system.

But again, the drone takes acceleration commands, not velocity. You need another PID controller to take the velocity error (the velocity output by the first PID minus the current actual velocity) and output acceleration. Then, when the first PID outputs zero at the setpoint and the drone is still moving, the second PID can turn this into a negative acceleration command.

mprannoy commented 8 years ago

You know that the AR Drone commands that we give are tilt commands, which are effectively acceleration. So, what I'm saying is, why not use: vel_msg.linear.x = kp * linear_error_x + ki * linear_integral_error_x + kd * linear_derivative_error_x

Also, have a look at https://www.youtube.com/watch?v=9jrswHAee_E, if you're still not clear.