kangqi-ni / multisensor-fusion-for-localization-and-mapping

A framework for multisensor fusion localization and mapping

Backend optimization without GNSS prior pose? #1

Closed michael-sz closed 4 weeks ago

michael-sz commented 2 years ago

[screenshot]

sliding_window.cpp

kangqi-ni commented 2 years ago

The map matching pose and GNSS prior pose are essentially the same thing. I use both terms interchangeably here. Let me know if you have more questions!

michael-sz commented 2 years ago

[screenshot] Sorry, I'm still confused. The GNSS prior pose comes from a GPS device, while the map matching pose comes from localization against the point cloud map, so how could they be the same thing? Hoping for your reply, thanks.

kangqi-ni commented 2 years ago

Sorry for the confusion. I meant that map_matching_pose is the NDT laser pose after scan context correction. The GNSS data in the KITTI rosbag was used as a ground-truth comparison, so I treated map_matching_pose as the GNSS prior pose instead, since I needed some globally consistent measurement in the optimization. If your dataset contains separate GNSS data, you should add new constraints to the optimization, which is very easy to do in this framework.

michael-sz commented 2 years ago

Got it! If I want to add a new GPS constraint to the optimization, can I directly apply the 'AddPRVAGMapMatchingPoseFactor' function, or do I need to write a new pose factor function?

michael-sz commented 2 years ago

By the way, during long-term localization (such as 7x24h), if data is continually inserted into the backend optimization (Ceres) module, will the optimization take longer and longer?

kangqi-ni commented 2 years ago

Using AddPRVAGMapMatchingPoseFactor is fine. This function should be able to handle any prior pose constraint. Also, performing long-term localization would not increase the optimization time in an unbounded way. This backend uses a sliding window to marginalize old constraints, so the optimization does not consider an unbounded number of key frames but still retains their information. You can experiment with the window size parameter in sliding_window.yaml.
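For reference, the relevant entry might look roughly like the sketch below; the exact key name (`sliding_window_size` here) is an assumption, so check your copy of sliding_window.yaml for the real parameter name:

```yaml
# Hypothetical excerpt of sliding_window.yaml -- the actual key name may differ.
sliding_window_size: 20   # number of key frames kept before marginalization
```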

michael-sz commented 2 years ago

Thanks for the reply! Meanwhile, when I run the code, I found that the IMU acc and gyro biases are zero after backend optimization. I tried changing the initial acc and gyro biases, but after backend optimization they were still zero...

michael-sz commented 2 years ago

Also, I found that if I set the parameter "imu_pre_integration" in sliding_window.yaml to false, the optimized_odometry result gets closer to the ground truth. So I guess the optimization result gets worse without optimizing the IMU's acc and gyro biases?

kangqi-ni commented 2 years ago

Are you using your own dataset to test this framework? For the KITTI dataset I am using, the IMU bias gets updated after each optimization. I am not entirely sure what the problem might be.

michael-sz commented 2 years ago

I am using the KITTI dataset. Maybe the way I read the updated bias is wrong; every time after optimization, the bias value I get is zero. Please tell me how you read the updated bias value, thanks.

kangqi-ni commented 2 years ago

Notice that optimized_key_frames_ in ceres_sliding_window.hpp stores all the optimized key frame parameters (position, orientation, velocity, acc bias, gyro bias). The ith key frame's parameters are stored in optimized_key_frames_[i].prvag. This is the code I use to print the IMU biases of the key frame at a given index:

// Print the acc bias (prvag[9..11]) and gyro bias (prvag[12..14])
// of the key frame at position `index`.
for (int i = 9; i < 15; ++i) {
    std::cout << optimized_key_frames_[index].prvag[i] << ' ';
}
michael-sz commented 2 years ago

Got it, thanks. The following image shows my way of getting the updated bias; please help me check it, thanks. [screenshot]

kangqi-ni commented 2 years ago

The latest_optimized_key_frame should be an OptimizedKeyFrame object as defined in ceres_sliding_window.hpp:

    struct OptimizedKeyFrame {
      double time;
      double prvag[15];
      bool fixed = false;
    };

prvag is a double array, so it does not have the b_a or b_g attributes you are trying to access, unless you modified that part of the code. prvag stores the parameters in the order pos, ori, vel, b_a, b_g, so indices 9-14 of the array correspond to the IMU biases. You can do the following to get the biases:

Eigen::Vector3d b_a(latest_optimized_key_frame.prvag[9],
                    latest_optimized_key_frame.prvag[10],
                    latest_optimized_key_frame.prvag[11]);
Eigen::Vector3d b_g(latest_optimized_key_frame.prvag[12],
                    latest_optimized_key_frame.prvag[13],
                    latest_optimized_key_frame.prvag[14]);

std::cout << "acc bias: " << b_a.transpose() << std::endl;
std::cout << "gyro bias: " << b_g.transpose() << std::endl;
michael-sz commented 2 years ago

Got it, I will try, thanks. Nevertheless, suppose the IMU bias has been updated after the last optimization, and assume that updated value is 'A'.

I found that the IMU initial bias in the next optimization is not updated to 'A'. Shouldn't the bias be reset using the newly optimized value?

kangqi-ni commented 2 years ago

Are you saying that after an optimization, the optimized bias is not used as the initial value for the next optimization? Could you tell me how you discovered that?

michael-sz commented 2 years ago

[screenshots] When there is a new key frame, optimization starts. The 'reset' function gives the IMU initial bias a new value, but the new value is not 'A'.

michael-sz commented 2 years ago

[screenshot]

Following your method, I tried to print the IMU bias value after optimization, but the value is still zero.

[screenshot]

kangqi-ni commented 1 year ago

You are correct. The bias of a new key frame is set to 0 instead of the previously optimized value. You can modify the algorithm to take in the latest bias value and compare the optimization results after the modification. Also, from the screenshot, it looks like the optimization did not succeed, because the initial cost and final cost are exactly the same; the final cost should be significantly smaller than the initial cost if the optimization succeeds. In README.md, I have added a link to the rosbag I used to test the framework. You can first run my dataset to see what the result should look like.