Unsigned-Long / iKalibr

iKalibr: Unified Targetless Spatiotemporal Calibration for Resilient Integrated Inertial Systems
https://github.com/Unsigned-Long/iKalibr.git

Translations are all zero #3

Closed kao-yang closed 2 months ago

kao-yang commented 3 months ago

Nice work! I'm trying to use this package to calibrate my sensors. The sensor list is three cameras, one LiDAR, and one IMU. I use roslaunch to run this package. There are some issues that confuse me:

Unsigned-Long commented 3 months ago

The spatiotemporal parameter files in the folders iteration/epoch/* or iteration/stage/* (e.g., ikalibr_param_x.yaml) do not represent the final calibration result; they are the temporary spatiotemporal parameters that iKalibr outputs after each sub-step of the calibration process completes, if the user specifies the output option Preference::Outputs::ParamInEachIter in the main configure file.

If the sensor suite you are calibrating has cameras, then SfM reconstruction is required by iKalibr. This is how it works:

Attention: colmap is necessary if you are using iKalibr for camera calibration. In the new version of iKalibr (v1.2.0), we have also included the recently open-sourced glomap (a global SfM reconstruction toolbox based on colmap). glomap is faster than colmap and produces comparable results. If you are calibrating cameras with iKalibr, we strongly recommend updating iKalibr to v1.2.0 (use git pull and then recompile in the catkin workspace, as sketched below) and configuring the glomap environment for fast SfM reconstruction.
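
A minimal update sketch, assuming the package was cloned into ~/catkin_ws/src/ikalibr (adjust the paths and build tool to your own workspace); glomap itself is a separate tool and is built following its own instructions:

```bash
# assumption: iKalibr was cloned into ~/catkin_ws/src/ikalibr -- adjust to your workspace
cd ~/catkin_ws/src/ikalibr
git pull                 # pull the v1.2.0 sources
cd ~/catkin_ws
catkin_make              # or `catkin build ikalibr` if you use catkin_tools
```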

Good tidings to you! If you have any questions or problems, feel free to open an issue! :smiley:

Unsigned-Long commented 3 months ago

For more details, please see here!

kao-yang commented 3 months ago

I'm trying to run the colmap commands in sfm-command-line.txt, but an abort occurs in the last step. Below is my operation process. Run the colmap command (WAY 1):

```bash
colmap gui --database_path /home/sch/my_learn/iKalibr/src/ikalibr/output/3/images/ob_camera_back/left_ir/image_raw/sfm_ws/database.db --image_path /home/sch/my_learn/iKalibr/src/ikalibr/output/3/images/ob_camera_back/left_ir/image_raw/images
```

When all the images had been processed in the GUI, I just closed the GUI directly (CTRL+C). Then I ran the last step:

```bash
colmap model_converter --input_path /home/sch/my_learn/iKalibr/src/ikalibr/output/3/images/ob_camera_back/left_ir/image_raw/sfm_ws/0 --output_path /home/sch/my_learn/iKalibr/src/ikalibr/output/3/images/ob_camera_back/left_ir/image_raw/sfm_ws --output_type TXT
```

Then the error occurred: [screenshot of the abort message]

As this is my first time running colmap, I don't know whether I should do something in the GUI to save some files?

Unsigned-Long commented 3 months ago

Yes, if you use colmap gui to perform SfM, you need to save the result files manually after completion. Then model_converter can convert the output binary files to TXT format so that iKalibr can read them later.

I suggest you use the second way, that is, colmap mapper, which directly outputs the SfM result files to the specified folder; you can then convert them with model_converter (see the sketch below).

As suggested before, if possible, it is recommended to use glomap for reconstruction, which is very fast and has comparable results to colmap.
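
For reference, a minimal colmap command-line pipeline (WAY 2) might look like the sketch below; the paths are placeholders for your own sfm_ws folder, and the exact commands generated for your dataset are the ones written by iKalibr to sfm-command-line.txt:

```bash
# placeholders -- point these at your own sfm_ws folder
DB=/path/to/sfm_ws/database.db
IMGS=/path/to/images
WS=/path/to/sfm_ws

colmap feature_extractor --database_path $DB --image_path $IMGS
colmap exhaustive_matcher --database_path $DB
colmap mapper --database_path $DB --image_path $IMGS --output_path $WS
# convert the binary model (sub-folder 0) to TXT so that iKalibr can read it
colmap model_converter --input_path $WS/0 --output_path $WS --output_type TXT
```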

kao-yang commented 3 months ago

Thanks, I will try WAY 2 and WAY 3 tonight. One more question: what does the TEMPORAL result really mean?

camera_msg_time_stamp + TO_CmToBr = really_camera_time

Is the above right?

Unsigned-Long commented 3 months ago

You can understand it as you said. The following way may be easier to understand: image time stamped by the camera (Cm) clock + TO_CmToBr = image time stamped by the reference IMU (Br) clock

where TO_CmToBr means the time offset from Cm to Br.
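
Written as a formula (same notation as above), with a purely illustrative value for the offset:

```latex
% t_{Cm}: image timestamp in the camera (Cm) clock
% t_{Br}: the same instant expressed in the reference IMU (Br) clock
% TO_{Cm->Br}: time offset from Cm to Br, estimated by iKalibr
t_{Br} = t_{Cm} + \mathrm{TO}_{Cm \rightarrow Br}
% e.g., if TO_{Cm->Br} = +0.005 s (illustrative only), an image stamped
% 100.000 s by the camera clock corresponds to 100.005 s in the Br clock
```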

kao-yang commented 3 months ago

I tried WAY 3 yesterday, and it works!

The extrinsic parameters between the IMU and LiDAR seem good, but the translation between the cameras, especially along the 'z' axis, does not: the real 'z' translation is about 0.09 m, while the estimated result is 0.03 m.

Because the robot is really heavy, I hauled it up, then swung and moved it for calibration.

When the motion along the 'z' axis is not large, can iKalibr still work well for cameras? Can you give me some suggestions about the motion during calibration, or about the params in the yaml file?

The sensors are an Xsens IMU (the reference IMU, br), a Livox MID-360 (/livox/lidar, /livox/imu), and three Orbbec Gemini355L cameras. Below are my params:

```yaml
Configor:
  DataStream:
    # key: IMU topic, value: IMU type. Supported IMU types are:
    #   1. SENSOR_IMU: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Imu.html (acce unit: m/s^2)
    #   2.    SBG_IMU: https://github.com/SBG-Systems/sbg_ros_driver.git (acce unit: m/s^2)
    #   3. SENSOR_IMU_G: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Imu.html (acce unit: G)
    IMUTopics:
      # at least one IMU is integrated in the sensor suite
      - key: "/imu_tmp"
        value:
          Type: "SENSOR_IMU"
          # the inertial intrinsic filename, a default template (identity) is provided in 'ikalibr/config'
          # if you want to perform IMU-only multi-IMU calibration, you are expected to pre-calibrate
          # the intrinsics of your IMUs using program 'ikalibr_imu_intri_calib'. Otherwise, just use the default
          # one, since for other sensor suites, such as visual-inertial, radar-inertial, etc.,
          # inertial intrinsics would be optimized in spatiotemporal calibration
          Intrinsics: "/home/sch/my_learn/iKalibr/src/ikalibr/config/lite3/imu-intri.yaml"
          # the weights should be set based on the noise level of your own IMUs
          # large noise level points to small weight
          AcceWeight: 17.68
          GyroWeight: 57.66
      # the second IMU, if only one IMU is integrated, just comment out the following key item
      - key: "/livox/imu"
        value:
          Type: "SENSOR_IMU_G"
          Intrinsics: "/home/sch/my_learn/iKalibr/src/ikalibr/config/lite3/imu-intri.yaml"
          AcceWeight: 5.68
          GyroWeight: 20.66
    # key: radar topic, value: radar type. Supported radar types are:
    #   1.     AINSTEIN_RADAR: https://github.com/AinsteinAI/ainstein_radar.git
    #   2. AWR1843BOOST_RAW: https://github.com/Unsigned-Long/ti_mmwave_rospkg.git
    #   3. AWR1843BOOST_CUSTOM: https://github.com/Unsigned-Long/ti_mmwave_rospkg.git
    #   4.  POINTCLOUD2_POSV: 'sensor_msgs/PointCloud2' with point format: [x, y, z, velocity]
    #   5.  POINTCLOUD2_POSIV: 'sensor_msgs/PointCloud2' with point format: [x, y, z, intensity, velocity]
    #   6.  POINTCLOUD2_XRIO: 'sensor_msgs/PointCloud2' with x-RIO point format (see https://github.com/christopherdoer/rio.git)
    RadarTopics:
      # if no radar is integrated in your sensor suite, just comment out the following key items
      # - key: "/radar0/scan"
      #   value:
      #     Type: "AWR1843BOOST_CUSTOM"
      #     Weight: 10.0
      # - key: "/radar1/scan"
      #   value:
      #     Type: "AWR1843BOOST_CUSTOM"
      #     Weight: 10.0
    # key: LiDAR topic, value: LiDAR type. Supported LiDAR types are:
    #   1.        Velodyne LiDARs: VLP_16_PACKET, VLP_POINTS
    #   2.          Ouster LiDARs: OUSTER_POINTS
    #   3. Hesai Pandar XT LiDARs: PANDAR_XT_POINTS
    #   4.           Livox LiDARs: LIVOX_CUSTOM (the official 'xfer_format'=1; mid-360 and avia are recommended)
    LiDARTopics:
      # if no LiDAR is integrated in your sensor suite, just comment out the following key items
      - key: "/livox/lidar"
        value:
          Type: "LIVOX_CUSTOM"
          Weight: 100.0
    # key: camera topic, value: camera type. Supported camera types:
    #   1. SENSOR_IMAGE_GS: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
    #   2. SENSOR_IMAGE_RS_FIRST: first-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
    #   3. SENSOR_IMAGE_RS_MID: middle-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
    #   4. SENSOR_IMAGE_RS_LAST: last-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/Image.html
    #   5. SENSOR_IMAGE_COMP_GS: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html
    #   6. SENSOR_IMAGE_COMP_RS_FIRST: first-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html
    #   7. SENSOR_IMAGE_COMP_RS_MID: middle-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html
    #   8. SENSOR_IMAGE_COMP_RS_LAST: last-row exposure, https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CompressedImage.html
    CameraTopics:
      # if no camera is integrated in your sensor suite, just comment out the following key items
      - key: "/ob_camera_back/left_ir/image_raw"
        value:
          Type: "SENSOR_IMAGE_GS"
          # the camera intrinsic filename, default templates are provided in 'ikalibr/config'
          # two types of optical cameras are supported, namely 'pinhole-brown' and 'pinhole-fisheye'
          # these camera intrinsics are required to be pre-calibrated
          # Kalibr is recommended to perform intrinsic calibration (https://github.com/ethz-asl/kalibr.git)
          Intrinsics: "/home/sch/my_learn/iKalibr/src/ikalibr/config/lite3/back.yaml"
          Weight: 50.0
          # use visual features whose track length is larger than 'TrackLengthMin'
          # a larger value means fewer features are considered in solving
          TrackLengthMin: 10
      - key: "/ob_camera_front/left_ir/image_raw"
        value:
          Type: "SENSOR_IMAGE_GS"
          Intrinsics: "/home/sch/my_learn/iKalibr/src/ikalibr/config/lite3/front.yaml"
          Weight: 50.0
          TrackLengthMin: 10
      - key: "/ob_camera_waist/left_ir/image_raw"
        value:
          Type: "SENSOR_IMAGE_GS"
          Intrinsics: "/home/sch/my_learn/iKalibr/src/ikalibr/config/lite3/waist.yaml"
          Weight: 50.0
          TrackLengthMin: 10
    # the reference IMU, it should be one of multiple IMUs (the ros topic of the IMU)
    ReferIMU: "/imu_tmp"
    BagPath: "/home/sch/bag/lite3/calib/1.bag"
    # the time piece: [BegTime, BegTime + Duration], unit: second(s)
    # if you want to use all time data for calibration, please set them to negative numbers
    # Note that the 'BegTime' here is measured from the start time of bag
    # and is not related to the timestamp of the data recorded in the package.
    # example: the multi-sensor data in the time range [40, 80] corresponds to excited motion (BeginTime: 40, Duration: 40)
    BeginTime: -1
    Duration: -1
    OutputPath: "/home/sch/my_learn/iKalibr/src/ikalibr/output/lite3/1"
  Prior:
    # unit: m/s^2
    GravityNorm: 9.81
    # priori about spatiotemporal parameters, given by corresponding config file path
    SpatTempPrioriPath: ""
    # if the sensors are hardware-synchronized, you could choose to fix the temporal parameters
    # by setting this field to 'false'
    OptTemporalParams: true
    # the range where the time offsets would be optimized.
    # make sure this range contains the ground truth of time offsets
    # If you're not sure, make this field large, but this could lead to longer optimization time
    TimeOffsetPadding: 0.10
    # readout time padding for RS camera, make sure this range contains the ground truth
    ReadoutTimePadding: 0.01
    # the time distance of two neighbor control points, which determines the accuracy
    # of the representation of the B-splines. Smaller distance would lead to longer optimization time
    # common choices: from '0.01' to '0.10'
    KnotTimeDist:
      SO3Spline: 0.05
      ScaleSpline: 0.05
    # when lidar is involved in the calibration framework, the ndt odometer is employed to recover pose roughly
    NDTLiDAROdometer:
      # 0.5 for indoor case and 1.0 for outdoor case
      Resolution: 0.5
      KeyFrameDownSample: 0.1
    LiDARDataAssociate:
      # leaf size when downsampling the map using the 'pcl::VoxelGrid' filter
      # note that this field is just for visualization and has no connection with calibration
      # for outdoor, 0.1 is suggested, and for indoor: 0.05 is suggested
      MapDownSample: 0.05
      # associate a point with a surfel when their distance is less than this value
      PointToSurfelMax: 0.1
      # choose a plane as a surfel for data association when its planarity is larger than this value
      PlanarityMin: 0.6
    # the loss function used for radar factor (m/s)
    CauchyLossForRadarFactor: 0.1
    # the loss function used for lidar factor (m)
    CauchyLossForLiDARFactor: 0.5
    # the loss function used for visual reprojection factor (pixel)
    CauchyLossForCameraFactor: 1.0
  Preference:
    # whether to use CUDA to speed up solving least-squares problems
    # if you have not installed the CUDA dependency, set it to 'false'
    UseCudaInSolving: false
    # currently available output content:
    # ParamInEachIter, BSplines, LiDARMaps, VisualMaps, RadarMaps, HessianMat,
    # VisualLiDARCovisibility, VisualKinematics, ColorizedLiDARMap
    # AlignedInertialMes, VisualReprojErrors, RadarDopplerErrors
    # NONE, ALL
    Outputs:
      # - LiDARMaps
      # - VisualMaps
      # - RadarMaps
      # - VisualLiDARCovisibility
      # - VisualKinematics
      # - ColorizedLiDARMap
      # - AlignedInertialMes
      # - VisualReprojErrors
      # - RadarDopplerErrors
      - ALL
    # supported data output format:
    # 1. JSON
    # 2. XML
    # 3. YAML
    # 4. BINARY (not recommended)
    # do not dwell on it, we have provided an additional ros program to perform data format transform
    OutputDataFormat: "YAML"
    # number of threads to use for solving; a negative value means use all available threads to perform solving
    ThreadsToUse: -1
    # scale of splines in viewer, you can also use 'a' and 'd' keys to
    # zoom out and in splines in run time
    SplineScaleInViewer: 3.0
    # scale of coordinates in viewer, you can also use 's' and 'w' keys
    # to zoom out and in coordinates in run time
    CoordSScaleInViewer: 0.3
```

Unsigned-Long commented 3 months ago

iKalibr is a motion-based spatiotemporal calibrator, so sufficient motion excitation is necessary for high-precision calibration. When the excitation is insufficient, the observability of the relevant spatiotemporal parameters will be poor (they may even be unobservable), which leads to unsatisfactory calibration results.

It can be said that this motion-based nature is both an advantage and a disadvantage of iKalibr :thinking: . It allows iKalibr to calibrate time offsets and does not require targets. However, for large sensor suites (such as those on ground mobile carts), it is difficult to obtain sufficient excitation for calibration. (Attention: this disadvantage holds for any other motion-based calibrator.)

If possible, please follow the motion excitation demonstration video we give here for data collection. In fact, if possible, you can download our open-source dataset (any rosbag) or the TUM GS-RS dataset and observe how we collected the data (e.g., by watching the camera image sequence with rosbag play to see how fast the shaking is and how long the collection takes), as in the example below.
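
A minimal inspection sketch; the bag file name and image topic below are placeholders, not the actual dataset values:

```bash
# terminal 1: replay a downloaded rosbag (placeholder file name)
rosbag play dataset.bag

# terminal 2: watch the camera stream to judge the shaking speed and collection duration
# (replace /camera/image_raw with the image topic actually recorded in the bag)
rosrun image_view image_view image:=/camera/image_raw
```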

kao-yang commented 2 months ago

What really confuses me is that the LiDAR and IMU calibration is great, but the cameras' z-axis translation is not calibrated well. Is it right that cameras need more sufficient motion excitation?

Unsigned-Long commented 2 months ago

Cameras and LiDARs are two popular exteroceptive sensors for ego-motion estimation. The biggest difference between them is that monocular cameras cannot perceive the metric scale of the real world. This makes cameras more problematic than LiDAR in many tasks, e.g., scale drift in typical VIO systems (although VIO can theoretically estimate the absolute scale). In fact, in the calibration problem the translation is also a scale-related quantity, and for this reason general camera-related calibrators often use a chessboard to introduce the real scale. Based on what you said, the poor translation calibration may be related to many factors, such as motion excitation, image quality (which affects the accuracy of feature point extraction), and the calibration quality of the camera distortion parameters.

Tip: iKalibr performs well on all data series of the TUM GS-RS dataset, and you can get some inspiration from it if possible. The corresponding calibration results of iKalibr on this dataset can be found here.

kao-yang commented 2 months ago

Thanks for your reply! One more question: can I use stereo cameras to make the camera scale more accurate? Should I set the param SpatTempPrioriPath to provide the stereo baseline between the two lenses?

Unsigned-Long commented 2 months ago

Yes, you can do this. This is supported in iKalibr v1.1.0. If you are confident about your prior parameters, you can pass them to iKalibr via the field SpatTempPrioriPath (its template configure file can be found here), and they will be maintained in the final calibration result (just configure the parameters you know and are sure about, such as the translation between the two cameras).

Unsigned-Long commented 2 months ago

A man wants to close this issue.