johannes-graeter / limo

Lidar-Monocular Visual Odometry
GNU General Public License v3.0

Failed to run LIMO on my own dataset #28

Closed Claud1234 closed 5 years ago

Claud1234 commented 5 years ago

I want to run LIMO on my own dataset. I have images and velodyne data as well. Referring to your template bag file, I recorded my bag file with the following topics: [screenshot of topics]. Since you said in another issue that the semantic labels are not compulsory, I disabled them as well.

Since I do not have grayscale images, I used 'image_proc' to convert my color images to grayscale.

The problem is about /tf and /tf_static. I do not really understand how you arranged your /tf and /tf_static topics, but I noticed you said the velodyne needs to be transformed into the camera frame. My dataset is from a vehicle that carries the lidar, camera and GPS. It has three frame_id/child_frame_id combinations: 1. agv_ndt/odom and agv_ndt/base_link, 2. agv_ndt/map and agv_ndt/odom, 3. world and agv_ndt/map.

Since I am not sure how this dataset was made and what these frame ids mean, I just converted the first combination to '/sensor/velodyne' and '/sensor/camera' (you can see it has 2916 messages, while the velodyne and images only have around 600; does this matter?).

I also changed the frame id in the topic '/sensor/camera/grayscale/left/image_rect' to '/sensor/camera' and in '/sensor/velodyne/cloud_euclidean' to '/sensor/velodyne', so they are consistent with the /tf topic.

I did not change the frame id of the color images, because I noticed the color image topic is not subscribed to by LIMO while it runs. I also did not change the ids in /tf_static, because I defined '/sensor/velodyne' as the header frame and '/sensor/camera' as the child frame directly in /tf.
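For reference, this is roughly how I rewrite the frame ids while copying the bag (a minimal sketch using the rosbag Python API; the frame mapping below is only an example, not my final calibration):

```python
import rosbag

# Example mapping from the original frame ids to the ones limo expects; adjust to your setup.
RENAMES = {
    'agv_ndt/base_link': 'sensor/velodyne',
    'agv_ndt/odom': 'sensor/camera',
}

with rosbag.Bag('renamed.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        # Rename the frame id in ordinary stamped messages (images, clouds, camera_info).
        if hasattr(msg, 'header'):
            msg.header.frame_id = RENAMES.get(msg.header.frame_id, msg.header.frame_id)
        # /tf and /tf_static carry a list of transforms, each with its own frame ids.
        if topic in ('/tf', '/tf_static'):
            for transform in msg.transforms:
                transform.header.frame_id = RENAMES.get(transform.header.frame_id,
                                                        transform.header.frame_id)
                transform.child_frame_id = RENAMES.get(transform.child_frame_id,
                                                       transform.child_frame_id)
        outbag.write(topic, msg, t)
```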

After all this processing of my own bag file, it is still impossible to run it with LIMO. The terminal looks like this:

[two terminal screenshots]

With your template kitti bag, after getting the tf transformation, it finds the images, prints the image information and then starts the calculation. With my bag file, it cannot find the images at all.

What do you think could be the reason for this? I would appreciate any advice worth trying.

johannes-graeter commented 5 years ago

Hi there,

thank you for your work to make limo spin on your platform! The assertion suggests that your camera model is not symmetric, i.e. the focal length in x does not equal the focal length in y, which limo extracts from the intrinsic matrix of your camera. Did you undistort your images to a pinhole camera model? If so, could you compute a different model that has the same focal length in x and y? It could also be changed in the code, but I think it is cleaner to undistort your images to a symmetric pinhole model.
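A minimal sketch of what I mean, using OpenCV (the calibration numbers below are placeholders; use your own):

```python
import cv2
import numpy as np

# Placeholder calibration; replace with the values from your camera calibration.
w, h = 1280, 720
K = np.array([[980.0, 0.0, 640.0],
              [0.0, 1010.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])

# Target model: symmetric pinhole (fx == fy) with zero distortion.
f = 0.5 * (K[0, 0] + K[1, 1])
K_new = np.array([[f, 0.0, K[0, 2]],
                  [0.0, f, K[1, 2]],
                  [0.0, 0.0, 1.0]])

map1, map2 = cv2.initUndistortRectifyMap(K, dist, None, K_new, (w, h), cv2.CV_32FC1)
img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)
# K_new (with zero distortion) is then what belongs in the camera_info K and P.
```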

Regards,

Johannes

Claud1234 commented 5 years ago

Hi:

I rectified my grayscale images as you suggested, and I also changed the values in camera_info to the new K and P matrices; now K and P have the same fx and fy.

The problem is that there is still no further log message after the transformation is received in the terminal, exactly the same as before (the second terminal screenshot I posted last time).

Now I am thinking about the topics /tf and /tf_static. I noticed that in your bag file, /tf is /local_cs (ground truth) to /sensor/camera, and /tf_static is /sensor/camera to /sensor/velodyne plus /sensor/camera to the four image frames (gray left, gray right, color left, color right).

Q1: Since limo only uses the gray left image, can I say that the transforms in /tf_static between '/sensor/camera' and the other three image frames are not used during limo's calculation?

Q2: Is /tf_static important for limo? In my own bag file, I set /tf to the transform between /sensor/velodyne and the camera directly, and my /tf_static contains something unrelated. Is this the reason I get nothing after the transformation is received (like the second terminal screenshot I posted last time)? Must I set up my /tf and /tf_static exactly as in your bag files?

Many thanks for any help.

Regards

Claud1234 commented 5 years ago

Hi:

Good news: you can forget my previous comment, because I have made limo run on my own data. Even though limo does not compute anything yet (I will explain later), at least it 'runs' without errors.

Regardless of performance, computation speed, etc., I found that only four topics are needed to make limo work (I tried this with the template 04.bag file you provided): [screenshot of topics]

I set up my own bag file like this as well, because this way I do not need to handle the complex /tf; I just need to give the absolute position and orientation between the lidar and the camera in /tf_static.
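For completeness, such a /tf_static can also be published at runtime with tf2_ros instead of being recorded into the bag (a sketch; the extrinsic values below are placeholders, not my real calibration):

```python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('limo_static_tf')
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'sensor/camera'    # parent frame, as in the kitti template bags
t.child_frame_id = 'sensor/velodyne'
t.transform.translation.x = 0.27       # placeholder extrinsics; use your own
t.transform.translation.y = 0.0
t.transform.translation.z = -0.08
t.transform.rotation.x = -0.5          # placeholder quaternion; use your own
t.transform.rotation.y = 0.5
t.transform.rotation.z = -0.5
t.transform.rotation.w = 0.5

broadcaster.sendTransform(t)
rospy.spin()
```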

After these operations, limo is able to 'run', but nothing is logged out; everything is 0! [screenshot]

Another issue is that there are always long delays between the calculation steps shown in the picture above: [screenshot]

With your template bag file, this usually appears only once before a new calculation starts.

These are the new problems I have right now. As for the second one, the delays: if it is only about optimization speed and does not affect the calculation, let's just ignore it for now.

The key problem is: why are all logged results zero?

Here is the situation with my bag file.

First, when playing my bag file, the topics /tf_static, /image and /velodyne do not start at the same moment in the bag; each of them is offset about 0.5 s from the others (/tf_static starts around 0.5 s, /image around 0.9 s and /velodyne around 1.4 s).

Second, /image and /velodyne are not synchronized in their time stamps. Their nanosecond time stamps differ and they also advance at different intervals. Do you think this is the reason why all outcomes are zero? Is it compulsory to synchronize the time stamps precisely down to nanoseconds? Here is the info of my bag file: [screenshot]
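One way I could imagine restamping everything offline would be to snap each cloud (and camera_info) to the nearest image stamp (a sketch; the image and cloud topic names are the ones from above, the camera_info name is my assumption):

```python
import rosbag

IMAGE = '/sensor/camera/grayscale/left/image_rect'
RESTAMP = {'/sensor/velodyne/cloud_euclidean',
           '/sensor/camera/grayscale/left/camera_info'}  # camera_info name assumed

msgs = list(rosbag.Bag('input.bag').read_messages())
image_stamps = [m.message.header.stamp for m in msgs if m.topic == IMAGE]

def closest(stamp):
    # Nearest image stamp in time to the given stamp.
    return min(image_stamps, key=lambda s: abs((s - stamp).to_sec()))

with rosbag.Bag('synced.bag', 'w') as outbag:
    for topic, msg, t in msgs:
        if topic in RESTAMP:
            msg.header.stamp = closest(msg.header.stamp)
        outbag.write(topic, msg, t)
```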

Thanks for any advises.

Regards

johannes-graeter commented 5 years ago

Hi Claud,

sorry for the late reply. I just recently changed my employer, so some things got lost on the way...

First of all, great that you made it run even though no results come out yet :) First problem, runtime: the feature matching and tracking takes too long (it should be about 1/10th of that time for images on kitti). I can think of 2 reasons: a) your images are too big -> scale them down to 1 megapixel (see the sketch below); b) you did not build in release mode.
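For b), building with optimizations, e.g. `catkin_make -DCMAKE_BUILD_TYPE=Release` (or the equivalent in your workflow), usually gives a large speedup. For a), a rough sketch of downscaling the bag images offline (the topic name is taken from the kitti template bags and the scale factor is just an example; remember to scale fx, fy, cx and cy in the matching camera_info by the same factor):

```python
import cv2
import rosbag
from cv_bridge import CvBridge

bridge = CvBridge()
SCALE = 0.5  # example factor; pick one that brings you to roughly 1 megapixel

with rosbag.Bag('small.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        if topic == '/sensor/camera/grayscale/left/image_rect':
            img = bridge.imgmsg_to_cv2(msg)
            img = cv2.resize(img, None, fx=SCALE, fy=SCALE)
            out = bridge.cv2_to_imgmsg(img, encoding='mono8')
            out.header = msg.header  # keep the original stamp and frame id
            outbag.write(topic, out, t)
        else:
            outbag.write(topic, msg, t)
```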

Second problem, all zeros: somehow no parameters are added to the SLAM problem, as the ceres output suggests... Hard to debug from here, but if you can, it would help to send me a sample of the data (host it somewhere so I can download it) so I can have a look at it...

johannes-graeter commented 5 years ago

Hi Claud, did you try the suggestions, and do you have some feedback?

baladeer commented 5 years ago

Hi all, how do you disable the semantics?

johannes-graeter commented 5 years ago

See issues #16 and #30 and come back to me with questions :)

Claud1234 commented 5 years ago

Hi all, how do you disable the semantics?

Actually, from the perspective of making LIMO 'run' with your own dataset, the semantics are not compulsory. The minimum data the input bag should contain are grayscale images, camera_info, tf_static and point clouds. I found that with these four topics LIMO is already able to run, though this does not guarantee a good result.
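As a sketch, stripping a bag down to such a minimal set could look like this (the camera_info topic name is my assumption; adjust all names to your bag):

```python
import rosbag

KEEP = {'/sensor/camera/grayscale/left/image_rect',
        '/sensor/camera/grayscale/left/camera_info',  # name assumed
        '/sensor/velodyne/cloud_euclidean',
        '/tf_static'}

with rosbag.Bag('minimal.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('input.bag').read_messages():
        if topic in KEEP:
            outbag.write(topic, msg, t)
```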

Claud1234 commented 5 years ago

Hi Claud, did you try the suggestions, and do you have some feedback?

Hi. Thanks for checking in, first of all. I was busy with another project in recent days, so I did not check this page frequently.

Actually, all the problems in my last post have been solved. I think the image scale and the build are OK. The problem was the timestamps of the topics. The reason I got 0 in the output and the long delays is that the time stamps of the images, camera_info and point clouds were not synchronized, like this: [screenshot]. You can see that none of the topics are synchronized inside the bag.

I managed to solve the synchronization issue. The new bag looks like this: [screenshot]

Now I am able to get effective results and there are no delays anymore. In general, LIMO has very strict synchronization requirements.

Even though I can get results, they are still not as good as on kitti. The environment of our dataset is much 'fiercer' than kitti's. Another point is that our velodyne is a VLP-16 while kitti uses a 64-layer HDL-64, so our point cloud is not as dense as in the kitti dataset.

At present, limo only produces effective results for a short period at the beginning; then it loses track of the keyframes and the Path output goes crazy as well.

I do not think this problem comes from the configuration anymore, but rather from the data itself. As I said before, our dataset is more like 'industrial data' compared with kitti's. There are many possible causes, for example that the contents of the images and of the velodyne do not match each other.

I am curious: have you ever tried LIMO on other practical datasets besides kitti?

johannes-graeter commented 5 years ago

Hi Claud,

thanks for investing your time :) Yes, I tested it intensively while finalizing my PhD. I used two autonomous driving platforms: one with cameras and a Velodyne HDL64, running LIMO with one of the cameras and the velodyne and without semantics. The other setup was without LIDAR but with a stereo camera setup and semantics; there the depth extraction from LIDAR was replaced by depth from stereo. The results on the first platform were of similar quality to KITTI and computed in real time; the stereo system was of lower quality but worked well.

But both were driving platforms, so the setup was quite similar to KITTI. I am not sure how the system would perform on different robotic platforms, though. The key component here is the depth extraction for the tracklets (https://github.com/johannes-graeter/mono_lidar_depth/tree/master/monolidar_fusion). In the current code the depth extraction relies on the dense 64-layer Velodyne and does depth interpolation for the camera tracklets with the PCL.
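As a toy illustration of that idea (plain numpy, not the actual monolidar_fusion code): project the lidar points through the pinhole model and take the depth of the projected points landing closest to a tracked feature:

```python
import numpy as np

def depth_for_feature(uv, points_cam, K, k=3):
    """uv: (2,) feature pixel; points_cam: (N, 3) lidar points in the camera frame."""
    in_front = points_cam[points_cam[:, 2] > 0.1]   # keep points in front of the camera
    proj = (K @ in_front.T).T                       # project with the intrinsic matrix
    pix = proj[:, :2] / proj[:, 2:3]                # perspective divide
    d2 = np.sum((pix - uv) ** 2, axis=1)            # squared pixel distance to the feature
    nearest = np.argsort(d2)[:k]
    return float(np.mean(in_front[nearest, 2]))     # average depth of the k closest points

# With a 16-layer lidar the closest projected points can be far from the feature,
# which is why this kind of interpolation degrades compared to a 64-layer sensor.
```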

However, if you only have 16 layers, that interpolation is not accurate anymore and the implemented heuristics will fail. I designed LIMO specifically so that the modules are interchangeable, so in fact what you can do (and what I did for stereo) is rewrite the depth extraction node so that it can handle 16 layers, which would be a great contribution. If you have a GPU on your system, I personally would go for extracting dense flow (with https://github.com/lmb-freiburg/flownet2 or https://github.com/simonmeister/UnFlow, or perhaps there are newer ones out by now :) ) and tracking the reprojected lidar points in the images for as many frames as possible. You could convert these into tracklet_depth_messages (as in this repo) and simply feed them into LIMO.
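A rough sketch of how such flow-based tracking of reprojected lidar points could look (my simplification, not code from limo; a real pipeline would also check flow consistency and occlusions):

```python
import numpy as np

def advect(pix, flow):
    """pix: (N, 2) pixel positions (x, y); flow: (H, W, 2) dense flow from frame t to t+1."""
    h, w = flow.shape[:2]
    ij = np.rint(pix).astype(int)
    valid = (ij[:, 0] >= 0) & (ij[:, 0] < w) & (ij[:, 1] >= 0) & (ij[:, 1] < h)
    out = pix.copy()
    out[valid] += flow[ij[valid, 1], ij[valid, 0]]  # look up flow at (row, col)
    return out, valid

# Each track that stays valid over several frames keeps the lidar depth measured at
# its first frame and could then be packed into a tracklet depth message for the backend.
```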

I have very high hopes for this approach; my first few tries with machine-learning-based flow extraction for SLAM looked very promising :) Unfortunately, I only have a little spare time now...

If you are interested, we could share some ideas via mail or set up a skype meeting if you want :)

Regards,

Johannes