tum-vision / lsd_slam

LSD-SLAM

Scene reconstructed pointcloud and increasing FPS #35

Closed rbarthseas closed 9 years ago

rbarthseas commented 9 years ago

Hey,

I am reconstructing a simple scene in a pointcloud using the lsd_slam library. I have a few questions before I start doing redundant or stupid things. :)

My main goal is to create and update a combined pointcloud of the scene that is published in ROS for other nodes to use, for example RVIZ for visualization or other nodes to do computations on.

Whilst running the lsd_slam_core on a live image stream ($ rosrun lsd_slam_core live_slam image:=/image_raw camera_info:=/camera_info), I can visualize such a reconstructed scene by running the lsd_slam_viewer (rosrun lsd_slam_viewer viewer).

I can also visualize the pointcloud in RVIZ by listening to the /lsd_slam/keyframes topic and extracting and republishing a pointcloud from that data. However, this is just a single frame and not a combined scene reconstruction that also uses the previous frames. Would I need to write my own combination algorithm? Or is the whole reconstructed pointcloud published somewhere, like what is displayed in the lsd_slam_viewer?

I also have a question about performance. When running the lsd_slam_core ($ rosrun lsd_slam_core live_slam image:=/image_raw camera_info:=/camera_info), it looks like the framerate is quite low due to the visualization in the 'DebugWindow Depth' window. Is there a parameter that turns off that display? I could not find it in the dynamic reconfigure parameters.

Thanks for the help! Ruud

rbarthseas commented 9 years ago

I dug a little deeper and it looks like the keyframeGraphMsg is filled with a collection of keyframes, displayed by the lsd_slam_viewer:

uint32 numFrames
uint8[] frameData

Not sure yet how to extract the collection of pointclouds from there into the one pointcloud that is displayed in the lsd_slam_viewer...
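
Digging further: if I read the lsd_slam_viewer code correctly, frameData seems to be a packed array of per-frame entries (an int id plus a 7-float Sim(3) pose), so it only carries updated keyframe poses, not the points themselves. A sketch of unpacking it (struct layout assumed from the viewer source):

#include "lsd_slam_viewer/keyframeGraphMsg.h"

// Packed per-frame entry inside keyframeGraphMsg::frameData,
// mirroring the struct used by the lsd_slam_viewer.
struct GraphFramePose
{
    int id;               // id of the keyframe whose pose was updated
    float camToWorld[7];  // Sim(3) pose, same layout as Sophus::Sim3f::data()
};

void graphCallback(const lsd_slam_viewer::keyframeGraphMsg::ConstPtr& msg)
{
    const GraphFramePose* frames =
        reinterpret_cast<const GraphFramePose*>(msg->frameData.data());

    for(unsigned int i = 0; i < msg->numFrames; i++)
    {
        // frames[i].camToWorld is the refined pose of keyframe frames[i].id;
        // the points themselves still have to come from the keyframe messages.
    }
}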

JakobEngel commented 9 years ago

There are two types of messages passed from the SLAM system to the viewer: keyframeMsg, which carries a single keyframe (its inverse depth map together with its current pose estimate), and keyframeGraphMsg, which carries the updated poses of all keyframes after pose-graph optimization.

I'm not sure if RVIZ can be used to display this message format correctly, but I don't think so - in particular, because it doesn't know sim(3) poses.

I think you can disable the depth map display by setting displayDepthMap to false in settings.cpp, however you will also lose the key press option - i.e. have no way to e.g. reset the system.
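
If I remember correctly it is just a plain bool, so the change is a one-liner (exact path from memory):

// in lsd_slam_core/src/util/settings.cpp
bool displayDepthMap = false;   // default is true; false disables the depth-map debug window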

rbarthseas commented 9 years ago

Thank you for your swift reply Jakob.

Okay, so the combined scene-reconstruction pointcloud, as displayed in the lsd_slam_viewer, is not available or published in ROS.

I guess I have to write 1) an algorithm to convert the keyframe information into a pointcloud usable in ROS (already done!) and 2) an algorithm that combines consecutive keyframes into a scene reconstruction by merging the incoming pointclouds.

For the second point I could either use the pose published by the lsd_slam nodes or use the pose that is known when my camera image is triggered.

This does raise another question: are keyframes published after each new incoming image, or are they published after a pre-determined number of images?

I do spot a dynamic reconfigure parameter that might control this; however, I am unsure how the keyframe frequency correlates with the parameter's value: "KFUsageWeight: [double] Determines how often keyframes are taken, depending on the overlap to the current keyframe. Larger -> more keyframes"

Best, Ruud

mhkabir commented 9 years ago

@rbarthseas could you please share the code you're using to get pointcloud from the key frame messages in RVIZ?

I think I can help with the concatenation system.

rbarthseas commented 9 years ago

Hi mhkabir,

I found a way to easily concatenate the pointclouds. I use PCL and the /lsd_slam/pose to transform them to the same frame.

To get the pointcloud from the keyframe, I use this code, based on code from the lsd_slam source.

// Callback for when lsd_slam publishes a keyframe on the lsd_slam/keyframes topic.
void pointcloudCallback(const slam::keyframeMsg::ConstPtr& raw_point_cloud_message)
{
    // If there are no points then we're done. (operator new never returns 0,
    // so the empty-message check has to happen before the allocation.)
    if(raw_point_cloud_message->width * raw_point_cloud_message->height == 0)
        return;

    // Copy the raw per-pixel data out of the message.
    originalInput = new InputPointDense[raw_point_cloud_message->width*raw_point_cloud_message->height];
    memcpy(originalInput, raw_point_cloud_message->pointcloud.data(), raw_point_cloud_message->width*raw_point_cloud_message->height*sizeof(InputPointDense));

    // The raw pointcloud is processed into this pointcloud.
    sensor_msgs::PointCloud processed_pointcloud;
    //processed_pointcloud.header.stamp    = robot_endpointstate_pose.header.stamp;
    processed_pointcloud.header.stamp.sec  = raw_point_cloud_message->time;

    // Get the pose of the received lsd-slam keyframe relative to the starting pose of lsd-slam.
    memcpy(camToWorld.data(), raw_point_cloud_message->camToWorld.data(), 7*sizeof(float));

    // Get camera parameters from the lsd-slam node. Needed to compute 3D coordinates.
    double fx = raw_point_cloud_message->fx;
    double fy = raw_point_cloud_message->fy;
    double cx = raw_point_cloud_message->cx;
    double cy = raw_point_cloud_message->cy;

    double fxi = 1/fx;
    double fyi = 1/fy;
    double cxi = -cx / fx;
    double cyi = -cy / fy;

    sensor_msgs::ChannelFloat32 channel_red;
    sensor_msgs::ChannelFloat32 channel_green;
    sensor_msgs::ChannelFloat32 channel_blue;

    channel_red.name   = "r";
    channel_green.name = "g";
    channel_blue.name  = "b";

    // For each point in the received lsd-slam keyframe, back-project its image
    // coordinates and inverse depth to a 3D world coordinate.
    for(int y = 1; y < raw_point_cloud_message->height - 1; y++)
    {
        for(int x = 1; x < raw_point_cloud_message->width - 1; x++)
        {
            // Skip all points without a valid (positive) inverse depth.
            if(originalInput[x + y*raw_point_cloud_message->width].idepth <= 0)
            {
                continue;
            }

            // Randomly skip points to sparsify the output cloud.
            if(sparsity_factor > 1 && rand() % sparsity_factor != 0)
            {
                continue;
            }

            // Calculate the depth of the point from its inverse depth.
            float depth = 1 / originalInput[x + y*raw_point_cloud_message->width].idepth;

            // Create the 3D point, with the lsd-slam pose transformation applied.
            Sophus::Vector3f transformed_point = camToWorld * (Sophus::Vector3f(x*fxi + cxi, y*fyi + cyi, 1) * depth);
            geometry_msgs::Point32 point;
            point.x = transformed_point[0];
            point.y = transformed_point[1];
            point.z = transformed_point[2];

            // Add point to pointcloud.
            processed_pointcloud.points.push_back(point);

            // Add color to point.
            channel_red.values.push_back(255);
            channel_green.values.push_back(255);
            channel_blue.values.push_back(0);
        }
    }

    // Add color channels to pointcloud.
    processed_pointcloud.channels.push_back(channel_red);
    processed_pointcloud.channels.push_back(channel_green);
    processed_pointcloud.channels.push_back(channel_blue);

    publisher.publish(processed_pointcloud);

    delete[] originalInput;
}
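
For reference, the callback above assumes a few things that are declared elsewhere in the node. Roughly, those declarations look like this (the InputPointDense layout mirrors the struct in the lsd_slam_viewer source; the remaining names, values and the Sophus header are just how I have them):

#include <cstdlib>                      // rand()
#include <ros/ros.h>
#include <sensor_msgs/PointCloud.h>
#include <geometry_msgs/Point32.h>
#include <sophus/sim3.hpp>              // header name may differ with your Sophus version
#include "slam/keyframeMsg.h"           // copy of lsd_slam_viewer's keyframeMsg in my 'slam' package

// Per-pixel payload of keyframeMsg::pointcloud, mirroring the lsd_slam_viewer struct.
struct InputPointDense
{
    float idepth;            // inverse depth; <= 0 means no valid depth
    float idepth_var;        // inverse depth variance
    unsigned char color[4];  // pixel color
};

// State used by the callback (globals here; class members in the real node).
InputPointDense* originalInput = 0;  // raw per-pixel data copied out of the message
Sophus::Sim3f camToWorld;            // keyframe pose, 7 floats copied from the message
int sparsity_factor = 2;             // keep roughly every n-th valid point
ros::Publisher publisher;            // advertises the processed pointcloud topic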

rbarthseas commented 9 years ago

Although I do have a problem with the concatenation. I use pcl::concatenatePointCloud for this. However, it gives me weird projections when I convert the ROS PointCloud2 to a PCL pointcloud and back again.
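
For reference, the concatenation step itself is roughly the following (simplified sketch, not the exact code; it assumes PCL >= 1.7 and that both clouds have identical fields and are already expressed in the same world frame):

#include <pcl/common/io.h>                   // pcl::concatenatePointCloud
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

// Convert both ROS clouds to PCL's binary blob type, concatenate, convert back.
sensor_msgs::PointCloud2 combineClouds(const sensor_msgs::PointCloud2& accumulated,
                                       const sensor_msgs::PointCloud2& new_keyframe_cloud)
{
    pcl::PCLPointCloud2 pcl_accumulated, pcl_new, pcl_combined;
    pcl_conversions::toPCL(accumulated, pcl_accumulated);
    pcl_conversions::toPCL(new_keyframe_cloud, pcl_new);

    // Fails (returns false) if the two clouds do not have the same fields.
    pcl::concatenatePointCloud(pcl_accumulated, pcl_new, pcl_combined);

    sensor_msgs::PointCloud2 combined;
    pcl_conversions::fromPCL(pcl_combined, combined);
    return combined;
}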

JacoPisa commented 9 years ago

I'm trying to visualize the LSD-SLAM pointcloud in RViz.

Did someone reach this goal with good results?

Is it better to modify the LSD-SLAM code to publish in a pointcloud format RViz understands, or to implement a node that converts it?

thanks in advance =)

JakobEngel commented 9 years ago

I just added a FAQ on that to the readme, please see here: https://github.com/tum-vision/lsd_slam#6-troubleshoot--faq

tanmayshankar commented 9 years ago

@rbarthseas
I just wanted to know if you had managed to implement publishing the pointcloud! I think the weirdness in the transformations might have something to do with the fact that frame orientations differ, i.e. the z axes point in different directions for a camera driver's optical frame in ROS and for a pointcloud in PCL. Let us know, thanks a lot!
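
If that is indeed the cause, a fix along these lines might work (just a sketch under that assumption; the exact axis mapping depends on how your pipeline defines its frames):

#include <pcl/common/transforms.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Geometry>

// Rotate a cloud from the camera optical convention (z forward, x right, y down)
// into the ROS body convention (x forward, y left, z up) before concatenating.
void opticalToRosFrame(const pcl::PointCloud<pcl::PointXYZ>& in,
                       pcl::PointCloud<pcl::PointXYZ>& out)
{
    Eigen::Matrix4f T = Eigen::Matrix4f::Identity();
    // x_ros = z_optical, y_ros = -x_optical, z_ros = -y_optical
    T.block<3,3>(0,0) << 0,  0, 1,
                        -1,  0, 0,
                         0, -1, 0;
    pcl::transformPointCloud(in, out, T);
}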

tanmayshankar commented 9 years ago

@rbarthseas Also, where are you adding your code snippet to the package? Or are you implementing this as a separate node?

rbarthseas commented 9 years ago

@tanmayshankar

Yes I got it to publish with the code above, implemented in a separate ROS node.

tanmayshankar commented 9 years ago

@rbarthseas

Hello, I added your code as a separate node, subscribed to and published the corresponding topics, and included header files from lsd_slam_viewer to incorporate the keyframeMsg type.

The build fails with an error telling me that 'slam' (from the callback signature) is not a type. Also, when I remove the slam namespace and just use keyframeMsg, it still fails with an error saying keyframeMsg is not a type.

Any ideas as to where I'm going wrong? Thanks again for the help.

tanmayshankar commented 9 years ago

Apparently that's not the only error - since I'm implementing it as a separate node, I have to define a lot of terms, like originalInput, etc. and include corresponding headers. Do you think you could share your source for that separate node?

rbrth commented 9 years ago

@tanmayshankar

Sure! You can find the whole package code here: https://github.com/rbrth/framework/tree/master/slam_bridge

Sorry I did not think to send the link in my previous comment. Enjoy.

boonflies commented 8 years ago

I tried downloading and building the package, https://github.com/rbrth/framework/tree/master/slam_bridge

But it seems to depend on another package, 'image_acquisition', available here: https://github.com/rbrth/framework/tree/master/

When I build, the 'image_acquisition' package seems to be missing files like 'HalconCpp.h'.

So, is the 'image_acquisition' package also needed along with 'slam_bridge'?

Ruudth commented 8 years ago

@boonflies

The slam_bridge package should not be dependent on the image_acquisition package. Could you try compiling the package separately with:

$ catkin_make --pkg slam_bridge

Ruudth commented 8 years ago

@boonflies

I see the package does #include <image_acquisition/request_camera_parameters.h>, and therefore looks for the service header generated by the image_acquisition package.

What you could do is remove this dependency by deleting the associated lines and then adding your own way of setting the camera parameters (or even hard-coding them temporarily to test).

boonflies commented 8 years ago

I was able to build slam_bridge. I am using the Parrot AR.Drone 2.0 camera for SLAM. What values should I pass for f, k1, k2, k3, P1, P2, Sx, Sy, Cx, Cy, Iw, Ih, fxi, fyi, cxi and cyi in lsd_slam_pointcloud_publisher.cpp?

My cfg file is as follows:

0.50355332 0.894045767 0.494061013 0.509863 0
640 360
crop
400 288

Ruudth commented 8 years ago

@boonflies

The k's and P's are distortion parameters, if I remember correctly, but they are not used in slam_bridge.

Did you calculate the camera parameters in your config file yourself?

Here you can see which parameter is which:

https://github.com/rbrth/framework/tree/master/slam_bridge/calibration_files
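
If I remember the lsd_slam calibration file format correctly (the FOV camera model, with the intrinsics normalized by the image size), the four lines of such a cfg file read roughly as:

fx/in_width fy/in_height cx/in_width cy/in_height d   <- normalized pinhole intrinsics, d = FOV distortion parameter
in_width in_height                                    <- input image size in pixels
crop                                                  <- rectification mode ("crop", "full" or "none")
out_width out_height                                  <- output image size after rectification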

boonflies commented 8 years ago

I did not calculate the camera parameters. I just used the cfg parameters mentioned by "Annyv2" in lsd_slam issue #122. It seemed to work pretty well. Do you suggest that I calculate the camera parameters for my drone myself? I don't currently have the checkerboard that is needed to calibrate it (I hope I am right about that).

I updated lsd_slam_pointcloud_publisher.cpp as follows:

bool SlamBridge::requestCameraParameters()
{
    Iw = 400; Ih = 360;
    fxi = 0.50355332;
    fyi = 0.894045767;
    cxi = 0.494061013;
    cyi = 0.509863;

    camera_parameters_loaded = true;
    return true;
}

and ran the node with rosrun slam_bridge lsd_slam_pointcloud_publisher, but I still don't see any messages being published on the /slam_pointcloud topic.

Ruudth commented 8 years ago

@boonflies

Yup you should calculate the camera parameters specifically for your camera. A checkerboard is indeed often required. There are multiple ways to calculate them and then hardcode them in your code. Google is your friend.
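
For example, the standard ROS route is the camera_calibration package with a printed checkerboard (the size/square values and topic names below are placeholders for the AR.Drone front camera; add --no-service-check if the driver does not offer set_camera_info):

$ rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/ardrone/front/image_raw camera:=/ardrone/front

Note that the intrinsics it reports still have to be written into a calibration file format that lsd_slam understands.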

boonflies commented 8 years ago

What could be the reason that no messages are being published on the /slam_pointcloud topic?

Ruudth commented 8 years ago

@boonflies

I would not know off-hand.
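
A generic first sanity check would be to verify that keyframes are actually arriving and that the node is registered as a publisher:

$ rostopic hz /lsd_slam/keyframes     # is lsd_slam producing keyframes at all?
$ rostopic info /slam_pointcloud      # is lsd_slam_pointcloud_publisher listed as a publisher?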

grotli commented 7 years ago

@boonflies

Did you ever find a solution to the problem of messages not being published? What other changes did you have to make to the code to get it to build?