zhixy / multical

Multiple targets for multiple IMUs, cameras and LiDARs (Lasers) spatiotemporal calibration

Questions regarding multiple lidar calibration. #1

Open Doodle1106 opened 3 years ago

Doodle1106 commented 3 years ago

Hi,

Thanks for this great work! Are there any plans to release the paper that you've been working on? I am quite interested in how the extrinsics between IMUs, cameras and LiDARs are found. I would really appreciate it if you could shed some light here.

Thanks!

zhixy commented 3 years ago

Hi, thanks for your attention. We are working on some thorough experiments to evaluate our algorithms; after finishing these experiments, a paper will come soon.

alexmora-sr commented 3 years ago

Hello,

Are there any updates regarding the paper mentioned above? I have successfully calibrated my laser-visual-inertial sensor system using multical and would also like to know more details about the reasoning behind it.

Huge thanks for open-sourcing your software; it will be a great addition to the community!

zhixy commented 3 years ago

Hi, glad to hear that multical helps your research! Yeah, the paper is a little bit late, mainly because I have a job, so there is very little spare time to work on the experiments and the paper. But anyway, the experiments are done and the paper is almost finished.

Doodle1106 commented 3 years ago

@zhixy

I tried to follow this objective function, but I don't quite understand how the LiDAR extrinsics are found. What does it try to optimize here? Are all 6-DoF sensor extrinsics fully observable? Would you please walk me through this part? I've written my current reading of the residual after the code below; please correct me if it's wrong.

Thanks in advance.

    def addLiDARErrorTerms(self, problem, poseSplineDv):
        for idx, obs in enumerate(self.targetObs):
            obs.errorTerms = []
            # normal of the target plane, expressed in the plane's own frame
            plane_normal = aopt.EuclideanExpression(np.array([0.0, 0.0, 1.0]))

            for i in obs.inliers:
                # per-point measurement time: estimated lidar time offset
                # (design variable) plus the point's own timestamp
                tk = self.lidarOffsetDv.toExpression() + self.lidarData[i, 3]

                # T_world_to_body
                T_w_b = poseSplineDv.transformationAtTime(
                    tk, self.timeOffsetPadding, self.timeOffsetPadding)

                # point in lidar frame
                point = self.lidarData[i, 0:3]
                distance = np.linalg.norm(point)
                dir_l = point / distance

                # LiDAR extrinsic design variable T_l_b, inverted and chained
                # with the spline pose to get the lidar pose in the world frame
                T_b_l = self.T_l_b_Dv.toExpression().inverse()
                T_w_l = T_w_b * T_b_l

                # T_plane_to_lidar
                T_p_l = self.planes[idx].T_p_w_Dv.toExpression() * T_w_l
                C_p_l = T_p_l.toRotationExpression()
                t_p = T_p_l.toEuclideanExpression()

                # signed distance from the lidar origin to the target plane
                # (the plane is z = 0 in its own frame)
                d = plane_normal.dot(t_p)

                # cosine of the angle between the plane normal and the beam
                # direction, expressed in the plane frame
                theta = plane_normal.dot(C_p_l * dir_l)

                # predicted range along the beam to the target plane
                predictedMeasurement = d / theta * -1.0

                if predictedMeasurement.toScalar() < 0.:
                    predictedMeasurement = predictedMeasurement * -1.0
                    print("Swapped sign! This should not happen normally!")

                # scalar residual between the measured range and the predicted
                # range, weighted by invR (the inverse measurement variance)
                error = ket.ScalarError(distance, self.invR,
                                        predictedMeasurement)

                obs.errorTerms.append(error)
                problem.addErrorTerm(error)
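
For anyone reading this later, here is my own reading of what the code above minimizes (my notation, not the authors', so please correct me if it's wrong). Each LiDAR return is a range measurement r along the unit beam direction d_l. The target is modeled as the plane z = 0 in its own frame, with normal n = (0, 0, 1), and T_p_l = (C_pl, t_pl) is the current estimate of the lidar pose in the plane frame, composed from the spline pose, the lidar-to-body extrinsic and the plane pose design variables. The predicted range of the beam to the plane is then

    \hat{r} = -\frac{n^\top t_{pl}}{n^\top C_{pl}\, d_l}, \qquad e = r - \hat{r},

and ScalarError(distance, self.invR, predictedMeasurement) appears to weight this point-to-plane range residual by invR. So, as far as I can tell, the LiDAR extrinsic T_l_b_Dv, the time offset lidarOffsetDv and the plane poses T_p_w_Dv are driven to whatever values make the predicted ranges to the target planes match the measured ranges along the spline trajectory.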