tum-vision / tum_ardrone

Repository for the tum_ardrone ROS package, implementing autonomous flight with PTAM-based visual navigation for the Parrot AR.Drone.
http://wiki.ros.org/tum_ardrone
GNU General Public License v3.0

Getting PTAM point cloud #1

Closed overbit closed 3 years ago

overbit commented 11 years ago

Is it possible to get and set, in any way (maybe through a ROS topic), information about the keypoints detected and used by the PTAM algorithm? This would be very helpful for saving and reusing detected features without the need to reinitialize PTAM every time.

JakobEngel commented 11 years ago

There is currently no way to save & load a PTAM map. Of course it's possible, but it would require quite some digging around in PTAM's code, which I currently don't have the time for.

If you just want to access the 3D positions of PTAM's map as a point cloud (e.g. to detect obstacles or something similar), that should be relatively easy to add.

mmyself commented 11 years ago

Hello Jakob, first of all, thank you for this very exciting work. It makes my dreams come true. Everything I dreamed of for the AR.Drone is now implemented. WOW. You and mani are the greatest. But enough of this.

Now we can go further. For me the next logical step is implementing the point cloud for ROS. I am thinking about how to do this, but I am not sure where to implement it. I think the main work is already written here: https://github.com/ethz-asl/ethzasl_ptam/blob/master/ptam/src/System.cc#L410
Can you give me a hint (or maybe somebody else who reads this can) as to where I should try to implement this piece of code from ethzasl_ptam?

JakobEngel commented 11 years ago

Yep, that's definitely the next thing to do :) - I just don't have time for it at the moment.

It should actually be pretty simple, as the point coordinates are already extracted and transformed into metric coordinates to display the map here: https://github.com/tum-vision/tum_ardrone/blob/master/src/stateestimation/PTAMWrapper.cpp#L538
mapPointsTransformed is filled with all map points there, so the function you referenced should be easily applicable at that point. The points are, I think, in the same coordinate system as the drone's pose published on /ardrone/predictedPose, i.e. metric world coordinates. Maybe this should only be done every once in a while rather than for every frame, for performance reasons; the point cloud does not change that frequently...

cheers, jakob
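
For anyone picking this up, here is a rough sketch of what such a publishing hook could look like. This is only a sketch: it is written as if it lived inside PTAMWrapper so that it can access shallowMapCS and mapPointsTransformed directly; the function name, the publisher and the "world" frame id are assumptions, and the PointCloud2 iterator helpers require a ROS distribution that ships sensor_msgs/point_cloud2_iterator.h.

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

// Copy PTAM's metrically scaled map points into a PointCloud2 message and publish it.
// Intended to be called only occasionally (e.g. when a new keyframe is added),
// not for every frame.
void publishMapPoints(ros::Publisher& cloudPub)
{
    sensor_msgs::PointCloud2 cloud;
    cloud.header.stamp = ros::Time::now();
    cloud.header.frame_id = "world";                  // assumed frame name

    pthread_mutex_lock(&shallowMapCS);                // protects mapPointsTransformed
    sensor_msgs::PointCloud2Modifier modifier(cloud);
    modifier.setPointCloud2FieldsByString(1, "xyz");  // x, y, z as FLOAT32
    modifier.resize(mapPointsTransformed.size());

    sensor_msgs::PointCloud2Iterator<float> it_x(cloud, "x"), it_y(cloud, "y"), it_z(cloud, "z");
    for (unsigned int i = 0; i < mapPointsTransformed.size(); ++i, ++it_x, ++it_y, ++it_z)
    {
        *it_x = mapPointsTransformed[i][0];
        *it_y = mapPointsTransformed[i][1];
        *it_z = mapPointsTransformed[i][2];
    }
    pthread_mutex_unlock(&shallowMapCS);

    cloudPub.publish(cloud);
}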

alexiip commented 11 years ago

I have been playing around with what was suggested by @JakobEngel and I got something working. However, I have a hard time interpreting the points that I am getting. I am trying to see how a point is related to where the camera (drone) is. For example, the closest point to the drone should have the largest Z, but where is it in the image frame? I tried using TrackerData->v2Image[0] for x, but it sometimes reports something that is larger than the picture frame... Could you suggest a way of getting the x and y coordinates in the image frame?

And again, thanks for the amazing work! This is really a great project!

mmyself commented 11 years ago

Hi alexiip, did you get a working point cloud in rviz? My point cloud is crashing rviz and I don't know why. Here is my code: https://github.com/mmyself/tum_ardrone/blob/pcl_and_footprint_rviz/src/stateestimation/PTAMWrapper.cpp#L552
Can you share yours as well?

JakobEngel commented 11 years ago

The points that come from PTAM (i.e. (mpMap->vpPoints)[i]->v3WorldPos) are in PTAM's "world" coordinate frame, NOT in the camera's coordinate frame and NOT in the real world frame (which is scaled differently and slightly offset). To get them in the camera's coordinate frame, use mpTracker->GetCurrentPose() * (mpMap->vpPoints)[i]->v3WorldPos. Then proj(p) should give you the respective pixel position, where proj is the camera's projection function (nonlinear for distorted cameras).

For drawing, I then transform them to the real world frame (scaled and slightly offset w.r.t. PTAM's "world" frame) and store them in PTAMWrapper.mapPointsTransformed. The points there are in the same frame as the computed drone pose.

I don't know what TrackerData->v2Image[0] contains, but if the x,y coordinates are out of bounds, my guess would be that the respective pixel is currently not visible ;)
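
To make the projection concrete, here is a minimal sketch (assumptions: TooN and PTAM's ATANCamera are available as in PTAMWrapper, the header path may differ, and the helper function itself is hypothetical):

#include <TooN/TooN.h>
#include <TooN/se3.h>
#include "ATANCamera.h"   // PTAM's camera model

// Project one PTAM map point into the current image.
//   worldPos     : (mpMap->vpPoints)[i]->v3WorldPos, in PTAM's "world" frame
//   camFromWorld : mpTracker->GetCurrentPose()
// Returns true if the point lies in front of the camera; the caller should
// additionally check that 'pixel' is inside the image bounds.
bool projectMapPoint(const TooN::Vector<3>& worldPos,
                     const TooN::SE3<>& camFromWorld,
                     ATANCamera& camera,
                     TooN::Vector<2>& pixel)
{
    TooN::Vector<3> p = camFromWorld * worldPos;   // point in camera coordinates
    if (p[2] <= 0) return false;                   // behind the camera
    // "proj": normalize onto the z=1 plane, then apply the nonlinear camera model
    pixel = camera.Project(TooN::makeVector(p[0] / p[2], p[1] / p[2]));
    return true;
}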

overbit commented 11 years ago

We have added publishing of both PointCloud2 and PoseStamped standard ROS messages. We publish a new cloud for every newly added keyframe (KF). I don't know if this is the best approach, since PTAM seems to rearrange/adjust points to more precise positions a little after the KF has been added to the map. I apologize for posting the patch here, but we are not working on GitHub.

Index: src/stateestimation/EstimationNode.cpp
===================================================================
--- src/stateestimation/EstimationNode.cpp  (revisione 56)
+++ src/stateestimation/EstimationNode.cpp  (revisione 113)
@@ -18,7 +18,10 @@
 #include "ros/ros.h"
 #include "ros/package.h"
 #include "geometry_msgs/Twist.h"
+#include "geometry_msgs/PoseStamped.h"
 #include "sensor_msgs/Image.h"
+#include "sensor_msgs/PointField.h"
+#include "sensor_msgs/PointCloud2.h"
 #include "tf/tfMessage.h"
 #include "tf/transform_listener.h"
 #include "tf/transform_broadcaster.h"
@@ -48,6 +51,8 @@
    navdata_channel = nh_.resolveName("/ardrone/navdata");
    control_channel = nh_.resolveName("/cmd_vel");
    output_channel = nh_.resolveName("/ardrone/predictedPose");
+   std_pose_channel = nh_.resolveName("/ardrone/std_pose");
+   point_cloud_channel = nh_.resolveName("/ardrone/point_cloud");
    video_channel = nh_.resolveName("/ardrone/image_raw");
    command_channel = nh_.resolveName("/tum_ardrone/com");
    packagePath = ros::package::getPath("tum_ardrone");
@@ -79,7 +84,10 @@
    vid_sub          = nh_.subscribe(video_channel,10, &EstimationNode::vidCb, this);

    dronepose_pub      = nh_.advertise<tum_ardrone::filter_state>(output_channel,1);
+   drone_std_pose_pub = nh_.advertise<geometry_msgs::PoseStamped>(std_pose_channel,1);
+   drone_point_cloud_pub = nh_.advertise<sensor_msgs::PointCloud2>(point_cloud_channel,1);

+
    tum_ardrone_pub    = nh_.advertise<std_msgs::String>(command_channel,50);
    tum_ardrone_sub    = nh_.subscribe(command_channel,50, &EstimationNode::comCb, this);

@@ -90,6 +98,8 @@
    currentLogID = 0;
    lastDroneTS = 0;
    lastRosTS = 0;
+   lastKF = 0;
+
    droneRosTSOffset = 0;
    lastNavStamp = ros::Time(0);
    filter = new DroneKalmanFilter(this);
@@ -98,7 +108,6 @@
    arDroneVersion = 0;
    //memset(&lastNavdataReceived,0,sizeof(ardrone_autonomy::Navdata));

-
 }

 EstimationNode::~EstimationNode()
@@ -266,6 +275,7 @@
          // get filter state msg
          pthread_mutex_lock( &filter->filter_CS );
          tum_ardrone::filter_state s = filter->getPoseAt(ros::Time().now() + predTime);
+         tum_ardrone::filter_state t_pose = filter->getCurrentPoseSpeed();
          pthread_mutex_unlock( &filter->filter_CS );

          // fill metadata
@@ -279,7 +289,90 @@
          // publish!
          dronepose_pub.publish(s);

+         geometry_msgs::PoseStamped pose_stmp;
+         pose_stmp.header.stamp = ros::Time().now();
+         pose_stmp.header.frame_id = "1";
+         pose_stmp.pose.position.x = t_pose.x;
+         pose_stmp.pose.position.y = t_pose.y;
+         pose_stmp.pose.position.z = t_pose.z;
+         geometry_msgs::Quaternion q = tf::createQuaternionMsgFromRollPitchYaw(t_pose.roll*3.1415/180,t_pose.pitch*3.1415/180,(-t_pose.yaw+90)*3.1415/180);

+         pose_stmp.pose.orientation = q;
+
+         drone_std_pose_pub.publish(pose_stmp);
+
+
+
+         // POINT CLOUD 2
+
+         int currentKF = ptamWrapper->keyFramesTransformed.size() ;
+
+         if (currentKF > lastKF) {
+             lastKF = currentKF;
+
+             sensor_msgs::PointCloud2 point_cloud;
+
+             pthread_mutex_lock(&ptamWrapper->shallowMapCS);
+
+             std::vector<tvec3> mpl = ptamWrapper->mapPointsTransformed;
+             int dimension = 3;
+
+             point_cloud.width = mpl.size();
+
+             point_cloud.header.stamp = ros::Time().now();
+             point_cloud.header.frame_id = "1";
+             point_cloud.height = 1;
+
+             point_cloud.fields.resize(dimension);
+
+             point_cloud.fields[0].name = "x";
+             point_cloud.fields[0].offset = 0*sizeof(uint32_t);
+             point_cloud.fields[0].datatype = sensor_msgs::PointField::FLOAT32;
+             point_cloud.fields[0].count = 1;
+
+             point_cloud.fields[1].name = "y";
+             point_cloud.fields[1].offset = 1*sizeof(uint32_t);
+             point_cloud.fields[1].datatype = sensor_msgs::PointField::FLOAT32;
+             point_cloud.fields[1].count = 1;
+
+             point_cloud.fields[2].name = "z";
+             point_cloud.fields[2].offset = 2*sizeof(uint32_t);
+             point_cloud.fields[2].datatype = sensor_msgs::PointField::FLOAT32;
+             point_cloud.fields[2].count = 1;
+
+             /*point_cloud.fields[3].name = "rgb";
+             point_cloud.fields[3].offset = 3*sizeof(uint32_t);
+             point_cloud.fields[3].datatype = sensor_msgs::PointField::INT32;
+             point_cloud.fields[3].count = 1;*/
+
+
+
+             point_cloud.point_step = dimension*sizeof(uint32_t);
+             point_cloud.row_step = point_cloud.point_step * point_cloud.width;
+
+             point_cloud.data.resize(point_cloud.row_step * point_cloud.height);
+
+             point_cloud.is_dense = false;
+
+             unsigned char* dat = &(point_cloud.data[0]);
+
+             for(unsigned int i=0 ; i < point_cloud.width; i++)
+             {
+                     float pos[] = {mpl[i][0], mpl[i][1], mpl[i][2]};
+                     memcpy(dat, pos ,3*sizeof(float));
+
+                     //uint32_t colorlvl = 0xffffffff;
+                     //memcpy(dat+3*sizeof(uint32_t),&colorlvl,sizeof(uint32_t));
+
+                 dat+=point_cloud.point_step;
+
+             }
+
+             pthread_mutex_unlock(&ptamWrapper->shallowMapCS);
+
+             drone_point_cloud_pub.publish(point_cloud);
+         }
+
          // --------- if need be: add fake PTAM obs --------
          // if PTAM updates hang (no video or e.g. init), filter is never permanently rolled forward -> queues get too big.
          // dont allow this to happen by faking a ptam observation if queue gets too big (500ms = 100 observations)
Index: src/stateestimation/EstimationNode.h
===================================================================
--- src/stateestimation/EstimationNode.h    (revisione 56)
+++ src/stateestimation/EstimationNode.h    (revisione 113)
@@ -54,6 +54,8 @@

    // output
    ros::Publisher dronepose_pub;
+   ros::Publisher drone_std_pose_pub;
+   ros::Publisher drone_point_cloud_pub;

    ros::NodeHandle nh_;

@@ -68,6 +70,8 @@
    std::string navdata_channel;
    std::string control_channel;
    std::string output_channel;
+   std::string std_pose_channel;
+   std::string point_cloud_channel;
    std::string video_channel;
    std::string command_channel;

@@ -77,7 +81,9 @@
    long lastRosTS;
    long droneRosTSOffset;

+   int lastKF;

+
    // save last navinfo received for forwarding...
    ardrone_autonomy::Navdata lastNavdataReceived;
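
For reference, a minimal node that consumes the cloud published by this patch could look roughly like this (the topic name /ardrone/point_cloud is the one resolved in the patch; the node name and the use of the PointCloud2 iterator helpers are assumptions):

#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

// Print every map point of each incoming cloud (x, y, z in metric world coordinates).
void cloudCb(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    sensor_msgs::PointCloud2ConstIterator<float> it_x(*msg, "x"), it_y(*msg, "y"), it_z(*msg, "z");
    for (; it_x != it_x.end(); ++it_x, ++it_y, ++it_z)
        ROS_INFO("map point: %.3f %.3f %.3f", *it_x, *it_y, *it_z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "ptam_cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/ardrone/point_cloud", 1, cloudCb);
    ros::spin();
    return 0;
}
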
alexiip commented 11 years ago

Jakob,

Thank you so much for your response. I really appreciate it.

In your experience with writing the wrapper, have you seen the proj function for the drone's camera? Before scavenging around for all of the camera's intrinsic parameters and having to write that projection function... do you know if that information is already there? (It must be, for PTAM to work...)

With your help I can see how to transform the information into the camera's coordinate frame... now I just need the x and y out of it, and that does not seem very trivial from what I can see.

Again great package and really, thank you so much for your time and help,

-Alexander

23pointsNorth commented 10 years ago

@overbit Is the suggested patch in working condition?

dgitz commented 10 years ago

I can confirm that this patch does publish the point cloud data, but the transform is currently not working quite right. See my forked tum_ardrone repo for more info: https://github.com/dgitz/tum_ardrone

lesire commented 10 years ago

Hello. Could you propose a pull request for that functionality based on hydro-devel?

dgitz commented 10 years ago

Hi, sorry we are only doing dev work in fuerte. I did create a pull request though.

rohanbhargava11 commented 10 years ago

For a pull request on hydro-devel the code needs to be catkinized as well. I can start working on that. Once the code is catkinized, we can test it on hydro. It will take some time, but I hope to get the work done as soon as possible.

lesire commented 10 years ago

@rohanbhargava11 you should wait till I review the pull request for fuerte at least, so that I can go on from a clean patch... I'll do that by tomorrow

oscarmendezm commented 9 years ago

Has anyone figured out how to get the TF to work properly?

jb-91 commented 7 years ago

@dgitz this patch allows extracting the point cloud from the PTAM system. What about the inverse process? I need to load it back into the PTAM algorithm.

batuhan007 commented 6 years ago

@JakobEngel Hi Jakob, you said that it is easy to get PTAM's map 3D points as a point cloud. In your PTAMWrapper.cpp code https://github.com/tum-vision/tum_ardrone/blob/master/src/stateestimation/PTAMWrapper.cpp#L560 there is the 'pointcloud.txt' file, but I can't create this file. What did I do wrong? Can you help with that? I simply want to extract the map points. What should I do? This is the most crucial section of my thesis...

vidhyagithub commented 6 years ago

Any idea how to extract the point cloud from the Android PTAM system? https://github.com/ICGJKU/APTAM-GPL