weterking closed this issue 6 months ago.
This is the point cloud file I use. Link: https://pan.baidu.com/s/1h3f3R6hYZ6y6u-jPRXaESg (extraction code: 1024)
Given the point cloud, you need to first obtain correspondences, then feed the correspondences to the solver (around 100 to 500 correspondences will work well). Judging from the terminal output, the compatibility graph is large and dense, hence the max clique solver is slow. In addition, cases with a low outlier ratio will take TEASER++ longer than those with a high outlier ratio, because the compatibility graph is denser.
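For illustration, here is a minimal sketch of that flow in C++ (the `matches` vector and the helper name are hypothetical placeholders for whatever your feature matcher produces; the solver parameters follow the shipped example):

```cpp
#include <utility>
#include <vector>

#include <Eigen/Core>
#include <teaser/registration.h>

// Build 3xK matrices containing only the matched points and hand them to TEASER++.
// `matches` holds (src_index, tgt_index) pairs from your own correspondence estimator.
inline Eigen::Matrix4d registerWithCorrespondences(
    const Eigen::Matrix<double, 3, Eigen::Dynamic>& src_cloud,
    const Eigen::Matrix<double, 3, Eigen::Dynamic>& tgt_cloud,
    const std::vector<std::pair<int, int>>& matches, double noise_bound) {
  Eigen::Matrix<double, 3, Eigen::Dynamic> src(3, matches.size());
  Eigen::Matrix<double, 3, Eigen::Dynamic> tgt(3, matches.size());
  for (size_t k = 0; k < matches.size(); ++k) {
    src.col(k) = src_cloud.col(matches[k].first);
    tgt.col(k) = tgt_cloud.col(matches[k].second);
  }

  teaser::RobustRegistrationSolver::Params params;
  params.noise_bound = noise_bound;  // bound on inlier residuals, not a noise generator
  params.cbar2 = 1;
  params.estimate_scaling = false;
  params.rotation_estimation_algorithm =
      teaser::RobustRegistrationSolver::ROTATION_ESTIMATION_ALGORITHM::GNC_TLS;

  teaser::RobustRegistrationSolver solver(params);
  solver.solve(src, tgt);  // column k of src corresponds to column k of tgt

  auto solution = solver.getSolution();
  Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
  T.topLeftCorner(3, 3) = solution.rotation;
  T.topRightCorner(3, 1) = solution.translation;
  return T;
}
```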
I'm sorry to take up your precious time; my questions may be quite basic. (1) I want to confirm whether teaser_cpp_ply can compute the correspondences itself; (2) What is a compatibility graph? The term does not seem to appear in your paper; (3) Why is it slower with a low outlier ratio? That seems counter-intuitive.
> (1) I want to confirm whether teaser_cpp_ply can compute the correspondences itself;

No. You need to use a NN to estimate the correspondences.
> (2) What is a compatibility graph? The term does not seem to appear in your paper;

It's the graph generated from TIMs.
> (3) Why is it slower with a low outlier ratio? That seems counter-intuitive.

Because the maximum clique solver is the fastest on sparse graphs.
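For intuition only (this is not the library's actual implementation, just a rough sketch of the idea): each pair of correspondences becomes an edge of the compatibility graph when the distance between the two source points matches the distance between the two target points up to the noise bound. With few outliers almost every pair passes this test, so the graph is dense and the maximum clique search becomes the bottleneck.

```cpp
#include <cmath>
#include <utility>
#include <vector>

#include <Eigen/Core>

// Rough sketch: correspondences i and j are "compatible" when their TIMs
// (translation-invariant measurements, i.e. pairwise point differences) have
// nearly the same length in the source and in the target.
std::vector<std::pair<int, int>> buildCompatibilityEdges(
    const Eigen::Matrix<double, 3, Eigen::Dynamic>& src,
    const Eigen::Matrix<double, 3, Eigen::Dynamic>& tgt, double noise_bound) {
  std::vector<std::pair<int, int>> edges;
  for (int i = 0; i < src.cols(); ++i) {
    for (int j = i + 1; j < src.cols(); ++j) {
      const double d_src = (src.col(i) - src.col(j)).norm();
      const double d_tgt = (tgt.col(i) - tgt.col(j)).norm();
      if (std::abs(d_src - d_tgt) <= 2.0 * noise_bound) {
        edges.emplace_back(i, j);  // low outlier ratio: almost all pairs pass, hence a dense graph
      }
    }
  }
  return edges;
}
```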
> No. You need to use a NN to estimate the correspondences.

I wonder if NN refers to neural network?
I looked at the code in teaser_cpp_fpfh.cc and teaser_cpp_ply.cc. In the addNoiseAndOutliers() function:

```cpp
void addNoiseAndOutliers(Eigen::Matrix<double, 3, Eigen::Dynamic>& tgt) {
  // Add uniform noise
  Eigen::Matrix<double, 3, Eigen::Dynamic> noise =
      Eigen::Matrix<double, 3, Eigen::Dynamic>::Random(3, tgt.cols()) * NOISE_BOUND;
  NOISE_BOUND / 2;
  tgt = tgt + noise;
```

you execute `NOISE_BOUND / 2;` on the variable NOISE_BOUND. Is the statement you originally wanted `NOISE_BOUND /= 2;`?
I used teaser_cpp_fpfh to test the same data set I uploaded before. I found that the error of the rotation matrix is acceptable, but the error of the translation vector is relatively large. Is this a disadvantage of the algorithm? I would guess that when the rotation error is small, many algorithms can recover the translation accurately, so why is the translation error so large here?
> > No. You need to use a NN to estimate the correspondences.
>
> I wonder if NN refers to neural network?

Yes.
> I found that the error of the rotation matrix is acceptable, but the error of the translation vector is relatively large. Is this a disadvantage of the algorithm?

There's some issue with the included FPFH estimator. I recommend using Open3D's FPFH features and using the Python binding.
You don't seem to have seen my question above about the `NOISE_BOUND / 2;` statement in addNoiseAndOutliers(); I'm looking forward to your answer when you have time.
> There's some issue with the included FPFH estimator. I recommend using Open3D's FPFH features and using the Python binding.

Following your suggestion, I abandoned the FPFH features, used my lab's own work for feature extraction, and tested SVD and TEASER on the same correspondences. The results are below: SVD is not as good as TEASER in terms of rotation, but in terms of translation SVD's results are far better than TEASER's. [result screenshots: SVD, TEASER]
> In addNoiseAndOutliers(), you execute `NOISE_BOUND / 2;` on the variable NOISE_BOUND. Is the statement you originally wanted `NOISE_BOUND /= 2;`? You don't seem to have seen this question; looking forward to your answer when you have time.

Great catch! It should be NOISE_BOUND/2 for the noise generation. Can you try the example again in this branch: https://github.com/MIT-SPARK/TEASER-plusplus/tree/bugfix/examples
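In other words, the intended noise-generation line presumably becomes something along these lines (a sketch, not the exact committed fix):

```cpp
// Random() draws uniformly from [-1, 1], so scaling by NOISE_BOUND / 2 keeps
// each noise component within [-NOISE_BOUND / 2, NOISE_BOUND / 2].
Eigen::Matrix<double, 3, Eigen::Dynamic> noise =
    Eigen::Matrix<double, 3, Eigen::Dynamic>::Random(3, tgt.cols()) * NOISE_BOUND / 2;
tgt = tgt + noise;
```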
> SVD is not as good as TEASER in terms of rotation, but in terms of translation SVD's results are far better than TEASER's.

Can you check how many inliers there are for TEASER? It is possible that the noise bound is too tight. If you are using the example code, see the comment above, as the noise generation was using the incorrect bound.
> Can you check how many inliers there are for TEASER? It is possible that the noise bound is too tight. If you are using the example code, see the comment above, as the noise generation was using the incorrect bound.

There is a problem: your noise bound code originally intended `noise_bound /= 2;`, but `noise_bound / 2;` was used instead, so the noise bound did not get smaller. Intuitively, though, the noise bound should become looser.
Even in your latest code, you define noise_bound as 0.001 (originally 0.05); isn't this a tighter bound?
The original noise bound was 0.05, and I expanded it by a factor of four; this is the result at 4x the bound: [result screenshot]. To compare, I also scaled it down by a factor of four; this is that result: [result screenshot]. Neither result is ideal.
You need to understand that the FPFH example uses the noise bound both for generating random noise and for setting up the solver. FPFH's behavior depends on the search radius, and the old value of 0.005 was too large for it to achieve satisfactory performance.
In my previous comment about noise bound being too tight, I was referring to your test on your own point cloud data, which is separate from the FPFH example. For your test, you probably do not want to actually add noise to your data.
Do you mean that my data needs a looser bound? But I quadrupled the bound and the result got worse.
The larger the bound, the more TEASER++ will behave like a solution obtained with SVD, as the GNC will essentially terminate at the first iteration. You can also try to estimate the ground-truth outlier ratio here, by calculating how many correspondences are close enough after applying the ground-truth rotation and translation, and see whether TEASER++ is selecting the correct inliers. If TEASER++ is selecting the correct inliers but the result is still bad, then there might be some hidden bugs in the solver. If, on the other hand, TEASER is selecting the incorrect inliers (even after tuning the thresholds), then your data is somehow violating our assumptions: either the noise is not isotropic, or the point clouds are not from rigid objects.
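A minimal sketch of that ground-truth check, assuming you have the ground-truth rotation R_gt and translation t_gt for your correspondences (the threshold is up to you; something on the order of the noise bound is a natural starting point):

```cpp
#include <Eigen/Core>

// Count correspondences that agree with the ground-truth transform: column k is
// treated as a ground-truth inlier when R_gt * src.col(k) + t_gt lands within
// `threshold` of tgt.col(k). The outlier ratio is then 1 - inliers / src.cols().
int countGroundTruthInliers(const Eigen::Matrix<double, 3, Eigen::Dynamic>& src,
                            const Eigen::Matrix<double, 3, Eigen::Dynamic>& tgt,
                            const Eigen::Matrix3d& R_gt, const Eigen::Vector3d& t_gt,
                            double threshold) {
  int inliers = 0;
  for (int k = 0; k < src.cols(); ++k) {
    if ((R_gt * src.col(k) + t_gt - tgt.col(k)).norm() <= threshold) {
      ++inliers;
    }
  }
  return inliers;
}
```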
If there are no further questions, we will close the issue :) Thx
I have read the docs and the papers, but I still don't know why the program is running so slowly.
This is my changed code. The S and T point clouds were preprocessed by myself; each has 10240 points.

```cpp
// An example showing TEASER++ registration (modified from the Stanford bunny example)
#include <cassert>
#include <chrono>
#include <iostream>
#include <random>
#include <vector>

#include <Eigen/Core>

#include <teaser/ply_io.h>
#include <teaser/registration.h>

// Macro constants for generating noise and outliers
#define NOISE_BOUND 0.05
#define N_OUTLIERS 1700
#define OUTLIER_TRANSLATION_LB 5
#define OUTLIER_TRANSLATION_UB 10

inline double getAngularError(Eigen::Matrix3d R_exp, Eigen::Matrix3d R_est) {
  return std::abs(std::acos(fmin(fmax(((R_exp.transpose() * R_est).trace() - 1) / 2, -1.0), 1.0)));
}

void addNoiseAndOutliers(Eigen::Matrix<double, 3, Eigen::Dynamic>& tgt) {
  // Add uniform noise
  Eigen::Matrix<double, 3, Eigen::Dynamic> noise =
      Eigen::Matrix<double, 3, Eigen::Dynamic>::Random(3, tgt.cols()) * NOISE_BOUND;
  NOISE_BOUND / 2; // stray statement discussed above; it has no effect
  tgt = tgt + noise;

  // Add outliers
  std::random_device rd;
  std::mt19937 gen(rd());
  std::uniform_int_distribution<> dis2(0, tgt.cols() - 1);                              // pos of outliers
  std::uniform_int_distribution<> dis3(OUTLIER_TRANSLATION_LB, OUTLIER_TRANSLATION_UB); // random translation
  std::vector<bool> expected_outlier_mask(tgt.cols(), false);
  for (int i = 0; i < N_OUTLIERS; ++i) {
    int c_outlier_idx = dis2(gen);
    assert(c_outlier_idx < expected_outlier_mask.size());
    expected_outlier_mask[c_outlier_idx] = true;
    tgt.col(c_outlier_idx).array() += dis3(gen); // random translation
  }
}

int main() {
  // Load the .ply file
  teaser::PLYReader reader;
  teaser::PointCloud src_cloud;
  auto status = reader.read("./example_data/left_face_10240.ply", src_cloud);
  int N = src_cloud.size();

  // Convert the point cloud to Eigen
  Eigen::Matrix<double, 3, Eigen::Dynamic> src(3, N);
  for (size_t i = 0; i < N; ++i) {
    src.col(i) << src_cloud[i].x, src_cloud[i].y, src_cloud[i].z;
  }

  // Homogeneous coordinates
  Eigen::Matrix<double, 4, Eigen::Dynamic> src_h;
  src_h.resize(4, src.cols());
  src_h.topRows(3) = src;
  src_h.bottomRows(1) = Eigen::Matrix<double, 1, Eigen::Dynamic>::Ones(N);

  // An arbitrary SE(3) transformation
  Eigen::Matrix4d T;
  // clang-format off
  T << 9.96926560e-01,  6.68735757e-02, -4.06664421e-02, -1.15576939e-01,
      -6.61289946e-02,  9.97617877e-01,  1.94008687e-02, -3.87705398e-02,
       4.18675510e-02, -1.66517807e-02,  9.98977765e-01,  1.14874890e-01,
       0,               0,               0,               1;
  // clang-format on

  // Apply transformation (disabled: the target is loaded from a second file instead)
  /* Eigen::Matrix<double, 4, Eigen::Dynamic> tgt_h = T * src_h;
  Eigen::Matrix<double, 3, Eigen::Dynamic> tgt = tgt_h.topRows(3);

  // Add some noise & outliers
  addNoiseAndOutliers(tgt); */

  // add code by dsy
  // teaser::PLYReader reader;
  teaser::PointCloud tgt_cloud;
  auto status1 = reader.read("./example_data/right_face_10240.ply", tgt_cloud);
  int M = tgt_cloud.size();
  Eigen::Matrix<double, 3, Eigen::Dynamic> tgt(3, M);
  for (size_t i = 0; i < M; ++i) {
    tgt.col(i) << tgt_cloud[i].x, tgt_cloud[i].y, tgt_cloud[i].z;
  }

  Eigen::Matrix<double, 4, Eigen::Dynamic> tgt_h;
  tgt_h.resize(4, tgt.cols());
  tgt_h.topRows(3) = tgt;
  tgt_h.bottomRows(1) = Eigen::Matrix<double, 1, Eigen::Dynamic>::Ones(M);

  tgt_h = T * tgt_h;
  tgt = tgt_h.topRows(3);

  std::cout << "Run TEASER++ registration" << std::endl;
  // Prepare solver parameters
  teaser::RobustRegistrationSolver::Params params;
  params.noise_bound = NOISE_BOUND;
  params.cbar2 = 1;
  params.estimate_scaling = false;
  params.rotation_max_iterations = 100;
  params.rotation_gnc_factor = 1.4;
  params.rotation_estimation_algorithm =
      teaser::RobustRegistrationSolver::ROTATION_ESTIMATION_ALGORITHM::GNC_TLS;
  params.rotation_cost_threshold = 0.005;

  // Solve with TEASER++. Note: every one of the 10240 points is passed in as a
  // correspondence here, which makes the compatibility graph very large.
  std::cout << "Solve with TEASER++" << std::endl;
  teaser::RobustRegistrationSolver solver(params);
  std::chrono::steady_clock::time_point begin = std::chrono::steady_clock::now();
  solver.solve(src, tgt);
  std::chrono::steady_clock::time_point end = std::chrono::steady_clock::now();

  auto solution = solver.getSolution();

  // Compare results
  std::cout << "=====================================" << std::endl;
  std::cout << "          TEASER++ Results           " << std::endl;
  std::cout << "=====================================" << std::endl;
  std::cout << "Expected rotation: " << std::endl << T.topLeftCorner(3, 3) << std::endl;
  std::cout << "Estimated rotation: " << std::endl << solution.rotation << std::endl;
  std::cout << "Error (deg): " << getAngularError(T.topLeftCorner(3, 3), solution.rotation)
            << std::endl << std::endl;
  std::cout << "Expected translation: " << std::endl << T.topRightCorner(3, 1) << std::endl;
  std::cout << "Estimated translation: " << std::endl << solution.translation << std::endl;
  std::cout << "Error (m): " << (T.topRightCorner(3, 1) - solution.translation).norm()
            << std::endl << std::endl;
  std::cout << "Number of correspondences: " << N << std::endl;
  std::cout << "Number of outliers: " << N_OUTLIERS << std::endl;
  std::cout << "Time taken (s): "
            << std::chrono::duration_cast<std::chrono::microseconds>(end - begin).count() / 1000000.0
            << std::endl;
}
```