Closed: GeraldLodron closed this issue 9 years ago.
@andersgb1 do you have an idea here?
I also noticed a small (maybe a bug) ugly thing:
With MSVC, CMake automatically sets PCL_ONLY_CORE_POINT_TYPES as a compiler preprocessor flag regardless of the cmake-gui checkbox PCL_ONLY_CORE_POINT_TYPES. But in pcl_config.h the output is /* #undef PCL_ONLY_CORE_POINT_TYPES */ if you do not check the cmake-gui checkbox; only if you check that checkbox does the output become #define PCL_ONLY_CORE_POINT_TYPES.
(Only ugly, I cannot see any errors caused by that.)
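(As an aside, a tiny standalone check makes it easy to see whether the macro actually ends up defined for a translation unit, whether it comes from pcl_config.h or from the extra MSVC command-line flag. This is only an illustrative sketch, not part of PCL, and the file name is made up:)
// macro_check.cpp -- illustrative only
#include <iostream>
#include <pcl/pcl_config.h>
int main ()
{
#ifdef PCL_ONLY_CORE_POINT_TYPES
  std::cout << "PCL_ONLY_CORE_POINT_TYPES is defined" << std::endl;
#else
  std::cout << "PCL_ONLY_CORE_POINT_TYPES is not defined" << std::endl;
#endif
  return (0);
}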
I must admit, I don't fully understand the problem. Is it within my class, or is it about the use of the macro for compiling only core types?
No, it does not necessarily have to do with the macro. I only thought it might be the reason because that macro was the only big change I could see between the two versions. Previously, before the macro needed to be enabled, it was possible to use both point types...
But I didn't introduce that macro..?
Me neither :-)
So, in summary, FORGET what I wrote about the macro. The prerejective registration does not work with PointNormal; other filters work with that point type, so there is something wrong with it...
Can you reproduce the issue?
So, if I understand you correctly, the only thing I'm supposed to do here to reproduce the error is to clone the head of PCL, compile it and run the test_registration?
Nearly:
You have to change (or add) a test that uses pcl::PointNormal as the first two point type template parameters.
My suggestion would be to add the following at line 763, before SampleConsensusPrerejective (and rename the normals from the feature estimation to normals_src and normals_trg):
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_source_ptr (new pcl::PointCloud<pcl::PointNormal>);
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_target_ptr (new pcl::PointCloud<pcl::PointNormal>);
pcl::concatenateFields (*cloud_source_ptr, normals_src, *normalcloud_source_ptr);
pcl::concatenateFields (*cloud_target_ptr, normals_trg, *normalcloud_target_ptr);
// see http://docs.pointclouds.org/1.0.1/group__io.html#ga3ff8970a2ee68a487a1ffd631f2fc7cd
SampleConsensusPrerejective<PointNormal, PointNormal, FPFHSignature33> reg;
I only tested it with VS2013 x64; I do not have a Linux system to test on.
I also only looked at the final transform via the getFinalTransformation () function, which differs a lot (it is not a valid rigid transformation matrix).
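(As a quick way to see what "no valid rigid transformation matrix" means in numbers, one can check the rotation block and the homogeneous row with Eigen. The snippet below is only an illustrative sketch, not part of the test; the helper name isValidRigidTransform and the 1e-3 tolerance are made up:)
#include <cmath>
#include <Eigen/Dense>
// A 4x4 rigid transform must have an orthonormal 3x3 rotation block with
// determinant +1 and a last row of 0 0 0 1 (up to numerical tolerance).
bool
isValidRigidTransform (const Eigen::Matrix4f &T, float tol = 1e-3f)
{
  const Eigen::Matrix3f R = T.topLeftCorner<3, 3> ();
  const bool orthonormal = (R * R.transpose () - Eigen::Matrix3f::Identity ()).norm () < tol;
  const bool det_is_one  = std::abs (R.determinant () - 1.0f) < tol;
  const bool last_row_ok = T.row (3).isApprox (Eigen::RowVector4f (0.f, 0.f, 0.f, 1.f), tol);
  return (orthonormal && det_is_one && last_row_ok);
}
// Usage, e.g.: std::cout << isValidRigidTransform (reg.getFinalTransformation ()) << std::endl;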
Hi,
"Unfortunately", I cannot reproduce the error. The test passes just fine, also with PointNormal. Here's my modified test file contents (also here at pastebin: http://pastebin.com/ZJ5QCqPX):
/*
* Software License Agreement (BSD License)
*
* Point Cloud Library (PCL) - www.pointclouds.org
* Copyright (c) 2010-2011, Willow Garage, Inc.
*
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of the copyright holder(s) nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* $Id$
*
*/
#include <gtest/gtest.h>
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/registration.h>
#include <pcl/registration/icp.h>
#include <pcl/registration/joint_icp.h>
#include <pcl/registration/icp_nl.h>
#include <pcl/registration/gicp.h>
#include <pcl/registration/gicp6d.h>
#include <pcl/registration/transformation_estimation_point_to_plane.h>
#include <pcl/registration/transformation_validation_euclidean.h>
#include <pcl/registration/correspondence_rejection_median_distance.h>
#include <pcl/registration/correspondence_rejection_sample_consensus.h>
#include <pcl/registration/correspondence_rejection_surface_normal.h>
#include <pcl/registration/correspondence_estimation_normal_shooting.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/pyramid_feature_matching.h>
#include <pcl/features/ppf.h>
#include <pcl/registration/ppf_registration.h>
#include <pcl/registration/ndt.h>
#include <pcl/registration/sample_consensus_prerejective.h>
// We need Histogram<2> to function, so we'll explicitly add kdtree_flann.hpp here
#include <pcl/kdtree/impl/kdtree_flann.hpp>
//(pcl::Histogram<2>)
using namespace pcl;
using namespace pcl::io;
using namespace std;
PointCloud<PointXYZ> cloud_source, cloud_target, cloud_reg;
PointCloud<PointXYZRGBA> cloud_with_color;
template <typename PointSource, typename PointTarget>
class RegistrationWrapper : public Registration<PointSource, PointTarget>
{
public:
void computeTransformation (pcl::PointCloud<PointSource> &, const Eigen::Matrix4f&) { }
bool hasValidFeaturesTest ()
{
return (this->hasValidFeatures ());
}
void findFeatureCorrespondencesTest (int index, std::vector<int> &correspondence_indices)
{
this->findFeatureCorrespondences (index, correspondence_indices);
}
};
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, findFeatureCorrespondences)
{
typedef Histogram<2> FeatureT;
typedef PointCloud<FeatureT> FeatureCloud;
RegistrationWrapper <PointXYZ, PointXYZ> reg;
FeatureCloud feature0, feature1, feature2, feature3;
feature0.height = feature1.height = feature2.height = feature3.height = 1;
feature0.is_dense = feature1.is_dense = feature2.is_dense = feature3.is_dense = true;
for (float x = -5.0f; x <= 5.0f; x += 0.2f)
{
for (float y = -5.0f; y <= 5.0f; y += 0.2f)
{
FeatureT f;
f.histogram[0] = x;
f.histogram[1] = y;
feature0.points.push_back (f);
f.histogram[0] = x;
f.histogram[1] = y - 2.5f;
feature1.points.push_back (f);
f.histogram[0] = x - 2.0f;
f.histogram[1] = y + 1.5f;
feature2.points.push_back (f);
f.histogram[0] = x + 2.0f;
f.histogram[1] = y + 1.5f;
feature3.points.push_back (f);
}
}
feature0.width = static_cast<uint32_t> (feature0.points.size ());
feature1.width = static_cast<uint32_t> (feature1.points.size ());
feature2.width = static_cast<uint32_t> (feature2.points.size ());
feature3.width = static_cast<uint32_t> (feature3.points.size ());
KdTreeFLANN<FeatureT> tree;
/* int k = 600;
reg.setSourceFeature<FeatureT> (feature0.makeShared (), "feature1");
reg.setTargetFeature<FeatureT> (feature1.makeShared (), "feature1");
reg.setKSearch<FeatureT> (tree.makeShared (), k, "feature1");
reg.setSourceFeature<FeatureT> (feature0.makeShared (), "feature2");
reg.setTargetFeature<FeatureT> (feature2.makeShared (), "feature2");
reg.setKSearch<FeatureT> (tree.makeShared (), k, "feature2");
reg.setSourceFeature<FeatureT> (feature0.makeShared (), "feature3");
reg.setTargetFeature<FeatureT> (feature3.makeShared (), "feature3");
reg.setKSearch<FeatureT> (tree.makeShared (), k, "feature3");
ASSERT_TRUE (reg.hasValidFeaturesTest ());
std::vector<int> indices;
reg.findFeatureCorrespondencesTest (1300, indices);
ASSERT_EQ ((int)indices.size (), 10);
const int correct_values[] = {1197, 1248, 1249, 1299, 1300, 1301, 1302, 1350, 1351, 1401};
for (size_t i = 0; i < indices.size (); ++i)
{
EXPECT_EQ (indices[i], correct_values[i]);
}
*/
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, IterativeClosestPoint)
{
IterativeClosestPoint<PointXYZ, PointXYZ> reg;
PointCloud<PointXYZ>::ConstPtr source (cloud_source.makeShared ());
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
reg.setInputCloud (source); // test for PCL_DEPRECATED
source = reg.getInputCloud (); // test for PCL_DEPRECATED
#pragma GCC diagnostic pop
reg.setInputSource (source);
reg.setInputTarget (cloud_target.makeShared ());
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
reg.setMaxCorrespondenceDistance (0.05);
// Register
reg.align (cloud_reg);
EXPECT_EQ (int (cloud_reg.points.size ()), int (cloud_source.points.size ()));
//Eigen::Matrix4f transformation = reg.getFinalTransformation ();
// EXPECT_NEAR (transformation (0, 0), 0.8806, 1e-3);
// EXPECT_NEAR (transformation (0, 1), 0.036481287330389023, 1e-2);
// EXPECT_NEAR (transformation (0, 2), -0.4724, 1e-3);
// EXPECT_NEAR (transformation (0, 3), 0.03453, 1e-3);
//
// EXPECT_NEAR (transformation (1, 0), -0.02354, 1e-3);
// EXPECT_NEAR (transformation (1, 1), 0.9992, 1e-3);
// EXPECT_NEAR (transformation (1, 2), 0.03326, 1e-3);
// EXPECT_NEAR (transformation (1, 3), -0.001519, 1e-3);
//
// EXPECT_NEAR (transformation (2, 0), 0.4732, 1e-3);
// EXPECT_NEAR (transformation (2, 1), -0.01817, 1e-3);
// EXPECT_NEAR (transformation (2, 2), 0.8808, 1e-3);
// EXPECT_NEAR (transformation (2, 3), 0.04116, 1e-3);
//
// EXPECT_EQ (transformation (3, 0), 0);
// EXPECT_EQ (transformation (3, 1), 0);
// EXPECT_EQ (transformation (3, 2), 0);
// EXPECT_EQ (transformation (3, 3), 1);
}
////////////////////////////////////////////////////////////////////////////////////////////////////////
void
sampleRandomTransform (Eigen::Affine3f &trans, float max_angle, float max_trans)
{
srand(0);
// Sample random transform
Eigen::Vector3f axis((float)rand() / RAND_MAX, (float)rand() / RAND_MAX, (float)rand() / RAND_MAX);
axis.normalize();
float angle = (float)rand() / RAND_MAX * max_angle;
Eigen::Vector3f translation((float)rand() / RAND_MAX, (float)rand() / RAND_MAX, (float)rand() / RAND_MAX);
translation *= max_trans;
Eigen::Affine3f rotation(Eigen::AngleAxis<float>(angle, axis));
trans = Eigen::Translation3f(translation) * rotation;
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, IterativeClosestPointWithRejectors)
{
IterativeClosestPoint<PointXYZ, PointXYZ> reg;
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
reg.setMaxCorrespondenceDistance (0.15);
// Add a median distance rejector
pcl::registration::CorrespondenceRejectorMedianDistance::Ptr rej_med (new pcl::registration::CorrespondenceRejectorMedianDistance);
rej_med->setMedianFactor (4.0);
reg.addCorrespondenceRejector (rej_med);
// Also add a SaC rejector
pcl::registration::CorrespondenceRejectorSampleConsensus<PointXYZ>::Ptr rej_samp (new pcl::registration::CorrespondenceRejectorSampleConsensus<PointXYZ>);
reg.addCorrespondenceRejector (rej_samp);
size_t ntransforms = 10;
for (size_t t = 0; t < ntransforms; t++)
{
// Sample a fixed offset between cloud pairs
Eigen::Affine3f delta_transform;
sampleRandomTransform (delta_transform, 0., 0.05);
// Sample random global transform for each pair, to make sure we aren't biased around the origin
Eigen::Affine3f net_transform;
sampleRandomTransform (net_transform, 2*M_PI, 10.);
PointCloud<PointXYZ>::ConstPtr source (cloud_source.makeShared ());
PointCloud<PointXYZ>::Ptr source_trans (new PointCloud<PointXYZ>);
PointCloud<PointXYZ>::Ptr target_trans (new PointCloud<PointXYZ>);
pcl::transformPointCloud (*source, *source_trans, delta_transform.inverse () * net_transform);
pcl::transformPointCloud (*source, *target_trans, net_transform);
reg.setInputSource (source_trans);
reg.setInputTarget (target_trans);
// Register
reg.align (cloud_reg);
Eigen::Matrix4f trans_final = reg.getFinalTransformation ();
// Translation should be within 1cm
for (int y = 0; y < 4; y++)
EXPECT_NEAR (trans_final (y, 3), delta_transform (y, 3), 1E-2);
// Rotation within .1
for (int y = 0; y < 4; y++)
for (int x = 0; x < 3; x++)
EXPECT_NEAR (trans_final (y, x), delta_transform (y, x), 1E-1);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, JointIterativeClosestPoint)
{
// Set up
JointIterativeClosestPoint<PointXYZ, PointXYZ> reg;
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
reg.setMaxCorrespondenceDistance (0.25); // Making sure the correspondence distance > the max translation
// Add a median distance rejector
pcl::registration::CorrespondenceRejectorMedianDistance::Ptr rej_med (new pcl::registration::CorrespondenceRejectorMedianDistance);
rej_med->setMedianFactor (4.0);
reg.addCorrespondenceRejector (rej_med);
// Also add a SaC rejector
pcl::registration::CorrespondenceRejectorSampleConsensus<PointXYZ>::Ptr rej_samp (new pcl::registration::CorrespondenceRejectorSampleConsensus<PointXYZ>);
reg.addCorrespondenceRejector (rej_samp);
size_t ntransforms = 10;
for (size_t t = 0; t < ntransforms; t++)
{
// Sample a fixed offset between cloud pairs
Eigen::Affine3f delta_transform;
// No rotation, since at a random offset this could make it converge to a wrong (but still reasonable) result
sampleRandomTransform (delta_transform, 0., 0.10);
// Make a few transformed versions of the data, plus noise
size_t nclouds = 5;
for (size_t i = 0; i < nclouds; i++)
{
PointCloud<PointXYZ>::ConstPtr source (cloud_source.makeShared ());
// Sample random global transform for each pair
Eigen::Affine3f net_transform;
sampleRandomTransform (net_transform, 2*M_PI, 10.);
// And apply it to the source and target
PointCloud<PointXYZ>::Ptr source_trans (new PointCloud<PointXYZ>);
PointCloud<PointXYZ>::Ptr target_trans (new PointCloud<PointXYZ>);
pcl::transformPointCloud (*source, *source_trans, delta_transform.inverse () * net_transform);
pcl::transformPointCloud (*source, *target_trans, net_transform);
// Add these to the joint solver
reg.addInputSource (source_trans);
reg.addInputTarget (target_trans);
}
// Register
reg.align (cloud_reg);
Eigen::Matrix4f trans_final = reg.getFinalTransformation ();
for (int y = 0; y < 4; y++)
for (int x = 0; x < 4; x++)
EXPECT_NEAR (trans_final (y, x), delta_transform (y, x), 1E-2);
EXPECT_TRUE (cloud_reg.empty ()); // By definition, returns an empty cloud
// Clear
reg.clearInputSources ();
reg.clearInputTargets ();
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, IterativeClosestPointNonLinear)
{
typedef PointXYZRGB PointT;
PointCloud<PointT>::Ptr temp_src (new PointCloud<PointT>);
copyPointCloud (cloud_source, *temp_src);
PointCloud<PointT>::Ptr temp_tgt (new PointCloud<PointT>);
copyPointCloud (cloud_target, *temp_tgt);
PointCloud<PointT> output;
IterativeClosestPointNonLinear<PointT, PointT> reg;
reg.setInputSource (temp_src);
reg.setInputTarget (temp_tgt);
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
// We get different results on 32 vs 64-bit systems. To address this, we've removed the explicit output test
// on the transformation matrix. Instead, we're testing to make sure the algorithm converges to a sufficiently
// low error by checking the fitness score.
/*
Eigen::Matrix4f transformation = reg.getFinalTransformation ();
EXPECT_NEAR (transformation (0, 0), 0.941755, 1e-2);
EXPECT_NEAR (transformation (0, 1), 0.147362, 1e-2);
EXPECT_NEAR (transformation (0, 2), -0.281285, 1e-2);
EXPECT_NEAR (transformation (0, 3), 0.029813, 1e-2);
EXPECT_NEAR (transformation (1, 0), -0.111655, 1e-2);
EXPECT_NEAR (transformation (1, 1), 0.990408, 1e-2);
EXPECT_NEAR (transformation (1, 2), 0.139090, 1e-2);
EXPECT_NEAR (transformation (1, 3), -0.001864, 1e-2);
EXPECT_NEAR (transformation (2, 0), 0.297271, 1e-2);
EXPECT_NEAR (transformation (2, 1), -0.092015, 1e-2);
EXPECT_NEAR (transformation (2, 2), 0.939670, 1e-2);
EXPECT_NEAR (transformation (2, 3), 0.042721, 1e-2);
EXPECT_EQ (transformation (3, 0), 0);
EXPECT_EQ (transformation (3, 1), 0);
EXPECT_EQ (transformation (3, 2), 0);
EXPECT_EQ (transformation (3, 3), 1);
*/
EXPECT_LT (reg.getFitnessScore (), 0.001);
// Check again, for all possible caching schemes
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = static_cast<bool> (iter / 2);
bool force_cache_reciprocal = static_cast<bool> (iter % 2);
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (temp_tgt);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (temp_src);
reg.setSearchMethodSource (tree_recip, force_cache_reciprocal);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.001);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, IterativeClosestPoint_PointToPlane)
{
typedef PointNormal PointT;
PointCloud<PointT>::Ptr src (new PointCloud<PointT>);
copyPointCloud (cloud_source, *src);
PointCloud<PointT>::Ptr tgt (new PointCloud<PointT>);
copyPointCloud (cloud_target, *tgt);
PointCloud<PointT> output;
NormalEstimation<PointNormal, PointNormal> norm_est;
norm_est.setSearchMethod (search::KdTree<PointNormal>::Ptr (new search::KdTree<PointNormal>));
norm_est.setKSearch (10);
norm_est.setInputCloud (tgt);
norm_est.compute (*tgt);
IterativeClosestPoint<PointT, PointT> reg;
typedef registration::TransformationEstimationPointToPlane<PointT, PointT> PointToPlane;
boost::shared_ptr<PointToPlane> point_to_plane (new PointToPlane);
reg.setTransformationEstimation (point_to_plane);
reg.setInputSource (src);
reg.setInputTarget (tgt);
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
// Use a correspondence estimator which needs normals
registration::CorrespondenceEstimationNormalShooting<PointT, PointT, PointT>::Ptr ce (new registration::CorrespondenceEstimationNormalShooting<PointT, PointT, PointT>);
reg.setCorrespondenceEstimation (ce);
// Add rejector
registration::CorrespondenceRejectorSurfaceNormal::Ptr rej (new registration::CorrespondenceRejectorSurfaceNormal);
rej->setThreshold (0); //Could be a lot of rotation -- just make sure they're at least within 0 degrees
reg.addCorrespondenceRejector (rej);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.005);
// Check again, for all possible caching schemes
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (tgt);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (src);
reg.setSearchMethodSource (tree_recip, force_cache_reciprocal);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.005);
}
// We get different results on 32 vs 64-bit systems. To address this, we've removed the explicit output test
// on the transformation matrix. Instead, we're testing to make sure the algorithm converges to a sufficiently
// low error by checking the fitness score.
/*
Eigen::Matrix4f transformation = reg.getFinalTransformation ();
EXPECT_NEAR (transformation (0, 0), 0.9046, 1e-2);
EXPECT_NEAR (transformation (0, 1), 0.0609, 1e-2);
EXPECT_NEAR (transformation (0, 2), -0.4219, 1e-2);
EXPECT_NEAR (transformation (0, 3), 0.0327, 1e-2);
EXPECT_NEAR (transformation (1, 0), -0.0328, 1e-2);
EXPECT_NEAR (transformation (1, 1), 0.9968, 1e-2);
EXPECT_NEAR (transformation (1, 2), 0.0851, 1e-2);
EXPECT_NEAR (transformation (1, 3), -0.0003, 1e-2);
EXPECT_NEAR (transformation (2, 0), 0.4250, 1e-2);
EXPECT_NEAR (transformation (2, 1), -0.0641, 1e-2);
EXPECT_NEAR (transformation (2, 2), 0.9037, 1e-2);
EXPECT_NEAR (transformation (2, 3), 0.0413, 1e-2);
EXPECT_EQ (transformation (3, 0), 0);
EXPECT_EQ (transformation (3, 1), 0);
EXPECT_EQ (transformation (3, 2), 0);
EXPECT_EQ (transformation (3, 3), 1);
*/
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, GeneralizedIterativeClosestPoint)
{
typedef PointXYZ PointT;
PointCloud<PointT>::Ptr src (new PointCloud<PointT>);
copyPointCloud (cloud_source, *src);
PointCloud<PointT>::Ptr tgt (new PointCloud<PointT>);
copyPointCloud (cloud_target, *tgt);
PointCloud<PointT> output;
GeneralizedIterativeClosestPoint<PointT, PointT> reg;
reg.setInputSource (src);
reg.setInputTarget (tgt);
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.001);
// Check again, for all possible caching schemes
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (tgt);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (src);
reg.setSearchMethodSource (tree_recip, force_cache_reciprocal);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.001);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, GeneralizedIterativeClosestPoint6D)
{
typedef PointXYZRGBA PointT;
Eigen::Affine3f delta_transform;
PointCloud<PointT>::Ptr src_full (new PointCloud<PointT>);
copyPointCloud (cloud_with_color, *src_full);
PointCloud<PointT>::Ptr tgt_full (new PointCloud<PointT>);
sampleRandomTransform (delta_transform, M_PI/0.1, .03);
pcl::transformPointCloud (cloud_with_color, *tgt_full, delta_transform);
PointCloud<PointT> output;
// VoxelGrid filter
PointCloud<PointT>::Ptr src (new PointCloud<PointT>);
PointCloud<PointT>::Ptr tgt (new PointCloud<PointT>);
pcl::VoxelGrid<PointT> sor;
sor.setLeafSize (0.02f, 0.02f, 0.02f);
sor.setInputCloud (src_full);
sor.filter (*src);
sor.setInputCloud (tgt_full);
sor.filter (*tgt);
GeneralizedIterativeClosestPoint6D reg;
reg.setInputSource (src);
reg.setInputTarget (tgt);
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (src->points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.003);
// Check again, for all possible caching schemes
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (tgt);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (src);
reg.setSearchMethodSource (tree_recip, force_cache_reciprocal);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (src->points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.003);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, NormalDistributionsTransform)
{
typedef PointNormal PointT;
PointCloud<PointT>::Ptr src (new PointCloud<PointT>);
copyPointCloud (cloud_source, *src);
PointCloud<PointT>::Ptr tgt (new PointCloud<PointT>);
copyPointCloud (cloud_target, *tgt);
PointCloud<PointT> output;
NormalDistributionsTransform<PointT, PointT> reg;
reg.setStepSize (0.05);
reg.setResolution (0.025f);
reg.setInputSource (src);
reg.setInputTarget (tgt);
reg.setMaximumIterations (50);
reg.setTransformationEpsilon (1e-8);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.001);
// Check again, for all possible caching schemes
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (tgt);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (src);
reg.setSearchMethodSource (tree_recip, force_cache_reciprocal);
// Register
reg.align (output);
EXPECT_EQ (int (output.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.001);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, SampleConsensusInitialAlignment)
{
// Transform the source cloud by a large amount
Eigen::Vector3f initial_offset (100, 0, 0);
float angle = static_cast<float> (M_PI) / 2.0f;
Eigen::Quaternionf initial_rotation (cos (angle / 2), 0, 0, sin (angle / 2));
PointCloud<PointXYZ> cloud_source_transformed;
transformPointCloud (cloud_source, cloud_source_transformed, initial_offset, initial_rotation);
// Create shared pointers
PointCloud<PointXYZ>::Ptr cloud_source_ptr, cloud_target_ptr;
cloud_source_ptr = cloud_source_transformed.makeShared ();
cloud_target_ptr = cloud_target.makeShared ();
// Initialize estimators for surface normals and FPFH features
search::KdTree<PointXYZ>::Ptr tree (new search::KdTree<PointXYZ>);
NormalEstimation<PointXYZ, Normal> norm_est;
norm_est.setSearchMethod (tree);
norm_est.setRadiusSearch (0.05);
PointCloud<Normal> normals;
FPFHEstimation<PointXYZ, Normal, FPFHSignature33> fpfh_est;
fpfh_est.setSearchMethod (tree);
fpfh_est.setRadiusSearch (0.05);
PointCloud<FPFHSignature33> features_source, features_target;
// Estimate the FPFH features for the source cloud
norm_est.setInputCloud (cloud_source_ptr);
norm_est.compute (normals);
fpfh_est.setInputCloud (cloud_source_ptr);
fpfh_est.setInputNormals (normals.makeShared ());
fpfh_est.compute (features_source);
// Estimate the FPFH features for the target cloud
norm_est.setInputCloud (cloud_target_ptr);
norm_est.compute (normals);
fpfh_est.setInputCloud (cloud_target_ptr);
fpfh_est.setInputNormals (normals.makeShared ());
fpfh_est.compute (features_target);
// Initialize Sample Consensus Initial Alignment (SAC-IA)
SampleConsensusInitialAlignment<PointXYZ, PointXYZ, FPFHSignature33> reg;
reg.setMinSampleDistance (0.05f);
reg.setMaxCorrespondenceDistance (0.1);
reg.setMaximumIterations (1000);
reg.setInputSource (cloud_source_ptr);
reg.setInputTarget (cloud_target_ptr);
reg.setSourceFeatures (features_source.makeShared ());
reg.setTargetFeatures (features_target.makeShared ());
// Register
reg.align (cloud_reg);
EXPECT_EQ (int (cloud_reg.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.0005);
// Check again, for all possible caching schemes
typedef pcl::PointXYZ PointT;
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud (cloud_target_ptr);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud (cloud_source_ptr);
reg.setSearchMethodSource(tree_recip, force_cache_reciprocal);
// Register
reg.align (cloud_reg);
EXPECT_EQ (int (cloud_reg.points.size ()), int (cloud_source.points.size ()));
EXPECT_LT (reg.getFitnessScore (), 0.0005);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, SampleConsensusPrerejective)
{
/*
* This test is a near-exact copy of the SampleConsensusInitialAlignment test,
* with the only modifications that:
* 1) the number of iterations is increased 1000 --> 5000
* 2) the feature correspondence randomness (the number of kNNs) is decreased 10 --> 2
*/
// Transform the source cloud by a large amount
Eigen::Vector3f initial_offset (100, 0, 0);
float angle = static_cast<float> (M_PI) / 2.0f;
Eigen::Quaternionf initial_rotation (cos (angle / 2), 0, 0, sin (angle / 2));
PointCloud<PointXYZ> cloud_source_transformed;
transformPointCloud (cloud_source, cloud_source_transformed, initial_offset, initial_rotation);
// Create shared pointers
PointCloud<PointXYZ>::Ptr cloud_source_ptr, cloud_target_ptr;
cloud_source_ptr = cloud_source_transformed.makeShared ();
cloud_target_ptr = cloud_target.makeShared ();
// Initialize estimators for surface normals and FPFH features
search::KdTree<PointXYZ>::Ptr tree (new search::KdTree<PointXYZ>);
// Normal estimator
NormalEstimation<PointXYZ, Normal> norm_est;
norm_est.setSearchMethod (tree);
norm_est.setRadiusSearch (0.005);
//PointCloud<Normal> normals;
PointCloud<Normal> normals_src, normals_trg;
// FPFH estimator
FPFHEstimation<PointXYZ, Normal, FPFHSignature33> fpfh_est;
fpfh_est.setSearchMethod (tree);
fpfh_est.setRadiusSearch (0.05);
PointCloud<FPFHSignature33> features_source, features_target;
// Estimate the normals and the FPFH features for the source cloud
norm_est.setInputCloud (cloud_source_ptr);
//norm_est.compute (normals);
norm_est.compute (normals_src);
fpfh_est.setInputCloud (cloud_source_ptr);
//fpfh_est.setInputNormals (normals.makeShared ());
fpfh_est.setInputNormals (normals_src.makeShared ());
fpfh_est.compute (features_source);
// Estimate the normals and the FPFH features for the target cloud
norm_est.setInputCloud (cloud_target_ptr);
//norm_est.compute (normals);
norm_est.compute (normals_trg);
fpfh_est.setInputCloud (cloud_target_ptr);
//fpfh_est.setInputNormals (normals.makeShared ());
fpfh_est.setInputNormals (normals_trg.makeShared ());
fpfh_est.compute (features_target);
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_source_ptr (new pcl::PointCloud<pcl::PointNormal>);
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_target_ptr (new pcl::PointCloud<pcl::PointNormal>);
pcl::concatenateFields (*cloud_source_ptr, normals_src , *normalcloud_source_ptr );
pcl::concatenateFields (*cloud_target_ptr, normals_trg, *normalcloud_target_ptr);
// Initialize Sample Consensus Prerejective with 5x the number of iterations and 1/5 feature kNNs as SAC-IA
//SampleConsensusPrerejective<PointXYZ, PointXYZ, FPFHSignature33> reg;
SampleConsensusPrerejective<PointNormal, PointNormal, FPFHSignature33> reg;
reg.setMaxCorrespondenceDistance (0.1);
reg.setMaximumIterations (5000);
reg.setSimilarityThreshold (0.6f);
reg.setCorrespondenceRandomness (2);
// Set source and target cloud/features
//reg.setInputSource (cloud_source_ptr);
//reg.setInputTarget (cloud_target_ptr);
reg.setInputSource (normalcloud_source_ptr);
reg.setInputTarget (normalcloud_target_ptr);
reg.setSourceFeatures (features_source.makeShared ());
reg.setTargetFeatures (features_target.makeShared ());
// Register
pcl::PointCloud<pcl::PointNormal> cloud_reg;
reg.align (cloud_reg);
// Check output consistency and quality of alignment
EXPECT_EQ (static_cast<int> (cloud_reg.points.size ()), static_cast<int> (cloud_source.points.size ()));
float inlier_fraction = static_cast<float> (reg.getInliers ().size ()) / static_cast<float> (cloud_source.points.size ());
EXPECT_GT (inlier_fraction, 0.95f);
// Check again, for all possible caching schemes
//typedef pcl::PointXYZ PointT;
typedef pcl::PointNormal PointT;
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (bool) iter/2;
bool force_cache_reciprocal = (bool) iter%2;
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
//tree->setInputCloud (cloud_target_ptr);
tree->setInputCloud (normalcloud_target_ptr);
reg.setSearchMethodTarget (tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip (new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
//tree_recip->setInputCloud (cloud_source_ptr);
tree_recip->setInputCloud (normalcloud_source_ptr);
reg.setSearchMethodSource(tree_recip, force_cache_reciprocal);
// Register
reg.align (cloud_reg);
// Check output consistency and quality of alignment
EXPECT_EQ (int (cloud_reg.points.size ()), int (cloud_source.points.size ()));
inlier_fraction = static_cast<float> (reg.getInliers ().size ()) / static_cast<float> (cloud_source.points.size ());
EXPECT_GT (inlier_fraction, 0.95f);
}
}
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, PyramidFeatureHistogram)
{
// Create shared pointers
PointCloud<PointXYZ>::Ptr cloud_source_ptr = cloud_source.makeShared (),
cloud_target_ptr = cloud_target.makeShared ();
PointCloud<Normal>::Ptr cloud_source_normals (new PointCloud<Normal> ()),
cloud_target_normals (new PointCloud<Normal> ());
search::KdTree<PointXYZ>::Ptr tree (new search::KdTree<PointXYZ>);
NormalEstimation<PointXYZ, Normal> normal_estimator;
normal_estimator.setSearchMethod (tree);
normal_estimator.setRadiusSearch (0.05);
normal_estimator.setInputCloud (cloud_source_ptr);
normal_estimator.compute (*cloud_source_normals);
normal_estimator.setInputCloud (cloud_target_ptr);
normal_estimator.compute (*cloud_target_normals);
PointCloud<PPFSignature>::Ptr ppf_signature_source (new PointCloud<PPFSignature> ()),
ppf_signature_target (new PointCloud<PPFSignature> ());
PPFEstimation<PointXYZ, Normal, PPFSignature> ppf_estimator;
ppf_estimator.setInputCloud (cloud_source_ptr);
ppf_estimator.setInputNormals (cloud_source_normals);
ppf_estimator.compute (*ppf_signature_source);
ppf_estimator.setInputCloud (cloud_target_ptr);
ppf_estimator.setInputNormals (cloud_target_normals);
ppf_estimator.compute (*ppf_signature_target);
vector<pair<float, float> > dim_range_input, dim_range_target;
for (size_t i = 0; i < 3; ++i) dim_range_input.push_back (pair<float, float> (static_cast<float> (-M_PI), static_cast<float> (M_PI)));
dim_range_input.push_back (pair<float, float> (0.0f, 1.0f));
for (size_t i = 0; i < 3; ++i) dim_range_target.push_back (pair<float, float> (static_cast<float> (-M_PI) * 10.0f, static_cast<float> (M_PI) * 10.0f));
dim_range_target.push_back (pair<float, float> (0.0f, 50.0f));
PyramidFeatureHistogram<PPFSignature>::Ptr pyramid_source (new PyramidFeatureHistogram<PPFSignature> ()),
pyramid_target (new PyramidFeatureHistogram<PPFSignature> ());
pyramid_source->setInputCloud (ppf_signature_source);
pyramid_source->setInputDimensionRange (dim_range_input);
pyramid_source->setTargetDimensionRange (dim_range_target);
pyramid_source->compute ();
pyramid_target->setInputCloud (ppf_signature_target);
pyramid_target->setInputDimensionRange (dim_range_input);
pyramid_target->setTargetDimensionRange (dim_range_target);
pyramid_target->compute ();
float similarity_value = PyramidFeatureHistogram<PPFSignature>::comparePyramidFeatureHistograms (pyramid_source, pyramid_target);
EXPECT_NEAR (similarity_value, 0.74101555347442627, 1e-4);
vector<pair<float, float> > dim_range_target2;
for (size_t i = 0; i < 3; ++i) dim_range_target2.push_back (pair<float, float> (static_cast<float> (-M_PI) * 5.0f, static_cast<float> (M_PI) * 5.0f));
dim_range_target2.push_back (pair<float, float> (0.0f, 20.0f));
pyramid_source->setTargetDimensionRange (dim_range_target2);
pyramid_source->compute ();
pyramid_target->setTargetDimensionRange (dim_range_target2);
pyramid_target->compute ();
float similarity_value2 = PyramidFeatureHistogram<PPFSignature>::comparePyramidFeatureHistograms (pyramid_source, pyramid_target);
EXPECT_NEAR (similarity_value2, 0.80097091197967529, 1e-4);
vector<pair<float, float> > dim_range_target3;
for (size_t i = 0; i < 3; ++i) dim_range_target3.push_back (pair<float, float> (static_cast<float> (-M_PI) * 2.0f, static_cast<float> (M_PI) * 2.0f));
dim_range_target3.push_back (pair<float, float> (0.0f, 10.0f));
pyramid_source->setTargetDimensionRange (dim_range_target3);
pyramid_source->compute ();
pyramid_target->setTargetDimensionRange (dim_range_target3);
pyramid_target->compute ();
float similarity_value3 = PyramidFeatureHistogram<PPFSignature>::comparePyramidFeatureHistograms (pyramid_source, pyramid_target);
EXPECT_NEAR (similarity_value3, 0.87623238563537598, 1e-3);
}
// Suat G: disabled, since the transformation does not look correct.
// ToDo: update transformation from the ground truth.
#if 0
//////////////////////////////////////////////////////////////////////////////////////////////////////////////////
TEST (PCL, PPFRegistration)
{
// Transform the source cloud by a large amount
Eigen::Vector3f initial_offset (100, 0, 0);
float angle = M_PI/6;
Eigen::Quaternionf initial_rotation (cos (angle / 2), 0, 0, sin (angle / 2));
PointCloud<PointXYZ> cloud_source_transformed;
transformPointCloud (cloud_source, cloud_source_transformed, initial_offset, initial_rotation);
// Create shared pointers
PointCloud<PointXYZ>::Ptr cloud_source_ptr, cloud_target_ptr, cloud_source_transformed_ptr;
cloud_source_transformed_ptr = cloud_source_transformed.makeShared ();
cloud_target_ptr = cloud_target.makeShared ();
// Estimate normals for both clouds
NormalEstimation<PointXYZ, Normal> normal_estimation;
search::KdTree<PointXYZ>::Ptr search_tree (new search::KdTree<PointXYZ> ());
normal_estimation.setSearchMethod (search_tree);
normal_estimation.setRadiusSearch (0.05);
PointCloud<Normal>::Ptr normals_target (new PointCloud<Normal> ()),
normals_source_transformed (new PointCloud<Normal> ());
normal_estimation.setInputCloud (cloud_target_ptr);
normal_estimation.compute (*normals_target);
normal_estimation.setInputCloud (cloud_source_transformed_ptr);
normal_estimation.compute (*normals_source_transformed);
PointCloud<PointNormal>::Ptr cloud_target_with_normals (new PointCloud<PointNormal> ()),
cloud_source_transformed_with_normals (new PointCloud<PointNormal> ());
concatenateFields (*cloud_target_ptr, *normals_target, *cloud_target_with_normals);
concatenateFields (*cloud_source_transformed_ptr, *normals_source_transformed, *cloud_source_transformed_with_normals);
// Compute PPFSignature feature clouds for source cloud
PPFEstimation<PointXYZ, Normal, PPFSignature> ppf_estimator;
PointCloud<PPFSignature>::Ptr features_source_transformed (new PointCloud<PPFSignature> ());
ppf_estimator.setInputCloud (cloud_source_transformed_ptr);
ppf_estimator.setInputNormals (normals_source_transformed);
ppf_estimator.compute (*features_source_transformed);
// Train the source cloud - create the hash map search structure
PPFHashMapSearch::Ptr hash_map_search (new PPFHashMapSearch (15.0 / 180 * M_PI,
0.05));
hash_map_search->setInputFeatureCloud (features_source_transformed);
// Finally, do the registration
PPFRegistration<PointNormal, PointNormal> ppf_registration;
ppf_registration.setSceneReferencePointSamplingRate (20);
ppf_registration.setPositionClusteringThreshold (0.15);
ppf_registration.setRotationClusteringThreshold (45.0 / 180 * M_PI);
ppf_registration.setSearchMethod (hash_map_search);
ppf_registration.setInputCloud (cloud_source_transformed_with_normals);
ppf_registration.setInputTarget (cloud_target_with_normals);
PointCloud<PointNormal> cloud_output;
ppf_registration.align (cloud_output);
Eigen::Matrix4f transformation = ppf_registration.getFinalTransformation ();
EXPECT_NEAR (transformation(0, 0), -0.153768, 1e-4);
EXPECT_NEAR (transformation(0, 1), -0.628136, 1e-4);
EXPECT_NEAR (transformation(0, 2), 0.762759, 1e-4);
EXPECT_NEAR (transformation(0, 3), 15.472, 1e-4);
EXPECT_NEAR (transformation(1, 0), 0.967397, 1e-4);
EXPECT_NEAR (transformation(1, 1), -0.252918, 1e-4);
EXPECT_NEAR (transformation(1, 2), -0.0132578, 1e-4);
EXPECT_NEAR (transformation(1, 3), -96.6221, 1e-4);
EXPECT_NEAR (transformation(2, 0), 0.201243, 1e-4);
EXPECT_NEAR (transformation(2, 1), 0.735852, 1e-4);
EXPECT_NEAR (transformation(2, 2), 0.646547, 1e-4);
EXPECT_NEAR (transformation(2, 3), -20.134, 1e-4);
EXPECT_NEAR (transformation(3, 0), 0.000000, 1e-4);
EXPECT_NEAR (transformation(3, 1), 0.000000, 1e-4);
EXPECT_NEAR (transformation(3, 2), 0.000000, 1e-4);
EXPECT_NEAR (transformation(3, 3), 1.000000, 1e-4);
}
#endif
/* ---[ */
int
main (int argc, char** argv)
{
if (argc < 4)
{
std::cerr << "No test files given. Please download `bun0.pcd`, `bun4.pcd` and `milk_color.pcd` and pass their paths to the test." << std::endl;
return (-1);
}
// Input
if (loadPCDFile (argv[1], cloud_source) < 0)
{
std::cerr << "Failed to read test file. Please download `bun0.pcd` and pass its path to the test." << std::endl;
return (-1);
}
if (loadPCDFile (argv[2], cloud_target) < 0)
{
std::cerr << "Failed to read test file. Please download `bun4.pcd` and pass its path to the test." << std::endl;
return (-1);
}
if (loadPCDFile (argv[3], cloud_with_color) < 0)
{
std::cerr << "Failed to read test file. Please download `milk_color.pcd` and pass its path to the test." << std::endl;
return (-1);
}
testing::InitGoogleTest (&argc, argv);
return (RUN_ALL_TESTS ());
// Transpose the cloud_model
/*for (size_t i = 0; i < cloud_model.points.size (); ++i)
{
// cloud_model.points[i].z += 1;
}*/
}
/* ]--- */
And here's my test output:
<HIDDEN>/pcl/build$ test/registration/test_registration ../test/bun0.pcd ../test/bun4.pcd ../test/milk_color.pcd
[==========] Running 12 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 12 tests from PCL
[ RUN ] PCL.findFeatureCorrespondences
[ OK ] PCL.findFeatureCorrespondences (0 ms)
[ RUN ] PCL.IterativeClosestPoint
[ OK ] PCL.IterativeClosestPoint (5 ms)
[ RUN ] PCL.IterativeClosestPointWithRejectors
[ OK ] PCL.IterativeClosestPointWithRejectors (117 ms)
[ RUN ] PCL.JointIterativeClosestPoint
[ OK ] PCL.JointIterativeClosestPoint (895 ms)
[ RUN ] PCL.IterativeClosestPointNonLinear
[ OK ] PCL.IterativeClosestPointNonLinear (90 ms)
[ RUN ] PCL.IterativeClosestPoint_PointToPlane
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[ OK ] PCL.IterativeClosestPoint_PointToPlane (47 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint
[ OK ] PCL.GeneralizedIterativeClosestPoint (111 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint6D
[ OK ] PCL.GeneralizedIterativeClosestPoint6D (30 ms)
[ RUN ] PCL.NormalDistributionsTransform
[ OK ] PCL.NormalDistributionsTransform (214 ms)
[ RUN ] PCL.SampleConsensusInitialAlignment
[ OK ] PCL.SampleConsensusInitialAlignment (2432 ms)
[ RUN ] PCL.SampleConsensusPrerejective
[ OK ] PCL.SampleConsensusPrerejective (1781 ms)
[ RUN ] PCL.PyramidFeatureHistogram
[ OK ] PCL.PyramidFeatureHistogram (810 ms)
[----------] 12 tests from PCL (6532 ms total)
[----------] Global test environment tear-down
[==========] 12 tests from 1 test case ran. (6532 ms total)
[ PASSED ] 12 tests.
Finally, I should note that I compiled PCL in Release mode, otherwise (using RelWithDebInfo) GCC eats my memory like a nerd with a Mars bar.
I also only compile in Release...
Could you test it with the PCL_ONLY_CORE_POINT_TYPES CMake flag enabled?
I'm on it... So while I wait for this - what is your test output?
I am afraid I do not have GTest for enabling the PCL tests yet.
I do not develop PCL, I only use it... we internally use CppUnit as our test framework... I only used a minimalistic main function based on the test and two test data sets of my own, with a std::cout of the final transform....
OK, I see.
Anyways, the test also passes on my side with PCL_ONLY_CORE_POINT_TYPES set to ON:
[==========] Running 12 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 12 tests from PCL
[ RUN ] PCL.findFeatureCorrespondences
[ OK ] PCL.findFeatureCorrespondences (1 ms)
[ RUN ] PCL.IterativeClosestPoint
[ OK ] PCL.IterativeClosestPoint (5 ms)
[ RUN ] PCL.IterativeClosestPointWithRejectors
[ OK ] PCL.IterativeClosestPointWithRejectors (116 ms)
[ RUN ] PCL.JointIterativeClosestPoint
[ OK ] PCL.JointIterativeClosestPoint (899 ms)
[ RUN ] PCL.IterativeClosestPointNonLinear
[ OK ] PCL.IterativeClosestPointNonLinear (90 ms)
[ RUN ] PCL.IterativeClosestPoint_PointToPlane
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[ OK ] PCL.IterativeClosestPoint_PointToPlane (47 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint
[ OK ] PCL.GeneralizedIterativeClosestPoint (107 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint6D
[ OK ] PCL.GeneralizedIterativeClosestPoint6D (31 ms)
[ RUN ] PCL.NormalDistributionsTransform
[ OK ] PCL.NormalDistributionsTransform (215 ms)
[ RUN ] PCL.SampleConsensusInitialAlignment
[ OK ] PCL.SampleConsensusInitialAlignment (2401 ms)
[ RUN ] PCL.SampleConsensusPrerejective
[ OK ] PCL.SampleConsensusPrerejective (1838 ms)
[ RUN ] PCL.PyramidFeatureHistogram
[ OK ] PCL.PyramidFeatureHistogram (797 ms)
[----------] 12 tests from PCL (6547 ms total)
[----------] Global test environment tear-down
[==========] 12 tests from 1 test case ran. (6547 ms total)
[ PASSED ] 12 tests.
Could you add a std::cout print of getFinalTransformation ()? Since I am only using the transform (and not the output aligned cloud), this is the only difference I can see (except for the compiler).
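(For example, something like this right after the align call, where reg is the SampleConsensusPrerejective instance from the test:)
std::cout << reg.getFinalTransformation () << std::endl;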
Done. Here's the modified code: http://pastebin.com/1tKhv7cb, and here's the output:
[==========] Running 12 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 12 tests from PCL
[ RUN ] PCL.findFeatureCorrespondences
[ OK ] PCL.findFeatureCorrespondences (1 ms)
[ RUN ] PCL.IterativeClosestPoint
[ OK ] PCL.IterativeClosestPoint (5 ms)
[ RUN ] PCL.IterativeClosestPointWithRejectors
[ OK ] PCL.IterativeClosestPointWithRejectors (116 ms)
[ RUN ] PCL.JointIterativeClosestPoint
[ OK ] PCL.JointIterativeClosestPoint (897 ms)
[ RUN ] PCL.IterativeClosestPointNonLinear
[ OK ] PCL.IterativeClosestPointNonLinear (88 ms)
[ RUN ] PCL.IterativeClosestPoint_PointToPlane
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[ OK ] PCL.IterativeClosestPoint_PointToPlane (50 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint
[ OK ] PCL.GeneralizedIterativeClosestPoint (107 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint6D
[ OK ] PCL.GeneralizedIterativeClosestPoint6D (30 ms)
[ RUN ] PCL.NormalDistributionsTransform
[ OK ] PCL.NormalDistributionsTransform (216 ms)
[ RUN ] PCL.SampleConsensusInitialAlignment
[ OK ] PCL.SampleConsensusInitialAlignment (2405 ms)
[ RUN ] PCL.SampleConsensusPrerejective
-0.0046173 0.934217 -0.356678 0.498182
-0.999116 0.0105909 0.0406744 99.9143
0.0417761 0.35655 0.933342 -4.13321
0 0 0 1
[ OK ] PCL.SampleConsensusPrerejective (1759 ms)
[ RUN ] PCL.PyramidFeatureHistogram
[ OK ] PCL.PyramidFeatureHistogram (796 ms)
[----------] 12 tests from PCL (6471 ms total)
[----------] Global test environment tear-down
[==========] 12 tests from 1 test case ran. (6471 ms total)
[ PASSED ] 12 tests.
OK, I am currently rebuilding PCL with GTest, give me some time...
OK, here is the original test output:
[==========] Running 12 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 12 tests from PCL
[ RUN ] PCL.findFeatureCorrespondences
[ OK ] PCL.findFeatureCorrespondences (0 ms)
[ RUN ] PCL.IterativeClosestPoint
[ OK ] PCL.IterativeClosestPoint (6 ms)
[ RUN ] PCL.IterativeClosestPointWithRejectors
[ OK ] PCL.IterativeClosestPointWithRejectors (85 ms)
[ RUN ] PCL.JointIterativeClosestPoint
[ OK ] PCL.JointIterativeClosestPoint (617 ms)
[ RUN ] PCL.IterativeClosestPointNonLinear
[ OK ] PCL.IterativeClosestPointNonLinear (79 ms)
[ RUN ] PCL.IterativeClosestPoint_PointToPlane
[pcl::IterativeClosestPointNonLinear::estimateRigidTransformationLM] Need at least 4 points to estimate a transform! Source and target have 3 points!
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPointNonLinear::estimateRigidTransformationLM] Need at least 4 points to estimate a transform! Source and target have 3 points!
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
unknown file: error: SEH exception with code 0xc0000005 thrown in the test body.
[ FAILED ] PCL.IterativeClosestPoint_PointToPlane (24 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint
[ OK ] PCL.GeneralizedIterativeClosestPoint (49 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint6D
[ OK ] PCL.GeneralizedIterativeClosestPoint6D (74 ms)
[ RUN ] PCL.NormalDistributionsTransform
[ OK ] PCL.NormalDistributionsTransform (332 ms)
[ RUN ] PCL.SampleConsensusInitialAlignment
[ OK ] PCL.SampleConsensusInitialAlignment (2269 ms)
[ RUN ] PCL.SampleConsensusPrerejective
[ OK ] PCL.SampleConsensusPrerejective (2266 ms)
[ RUN ] PCL.PyramidFeatureHistogram
[ OK ] PCL.PyramidFeatureHistogram (2419 ms)
[----------] 12 tests from PCL (8229 ms total)
[----------] Global test environment tear-down
[==========] 12 tests from 1 test case ran. (8230 ms total)
[ PASSED ] 11 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] PCL.IterativeClosestPoint_PointToPlane
1 FAILED TEST
Press any key to continue . . .
and now the output of your modified test:
[==========] Running 12 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 12 tests from PCL
[ RUN ] PCL.findFeatureCorrespondences
[ OK ] PCL.findFeatureCorrespondences (0 ms)
[ RUN ] PCL.IterativeClosestPoint
[ OK ] PCL.IterativeClosestPoint (6 ms)
[ RUN ] PCL.IterativeClosestPointWithRejectors
[ OK ] PCL.IterativeClosestPointWithRejectors (82 ms)
[ RUN ] PCL.JointIterativeClosestPoint
[ OK ] PCL.JointIterativeClosestPoint (621 ms)
[ RUN ] PCL.IterativeClosestPointNonLinear
[ OK ] PCL.IterativeClosestPointNonLinear (76 ms)
[ RUN ] PCL.IterativeClosestPoint_PointToPlane
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
[pcl::IterativeClosestPointNonLinear::estimateRigidTransformationLM] Need at least 4 points to estimate a transform! Source and target have 3 points!
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
E:\Develop\Source\pcl-1.8.1\test\registration\test_registration.cpp(450): error: Expected: (reg.getFitnessScore()) < (0.005), actual: 0.0163481 vs 0.005
[pcl::IterativeClosestPointNonLinear::estimateRigidTransformationLM] Need at least 4 points to estimate a transform! Source and target have 3 points!
[ FAILED ] PCL.IterativeClosestPoint_PointToPlane (15 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint
[ OK ] PCL.GeneralizedIterativeClosestPoint (51 ms)
[ RUN ] PCL.GeneralizedIterativeClosestPoint6D
[ OK ] PCL.GeneralizedIterativeClosestPoint6D (64 ms)
[ RUN ] PCL.NormalDistributionsTransform
[ OK ] PCL.NormalDistributionsTransform (336 ms)
[ RUN ] PCL.SampleConsensusInitialAlignment
[ OK ] PCL.SampleConsensusInitialAlignment (2087 ms)
[ RUN ] PCL.SampleConsensusPrerejective
-0.129368 0.92932 -0.345874 12.9617
-0.99005 -0.101575 0.09739 99.0038
0.055374 0.355032 0.933213 -5.49054
0 0 0 1
[ OK ] PCL.SampleConsensusPrerejective (1700 ms)
[ RUN ] PCL.PyramidFeatureHistogram
[ OK ] PCL.PyramidFeatureHistogram (1957 ms)
[----------] 12 tests from PCL (7002 ms total)
[----------] Global test environment tear-down
[==========] 12 tests from 1 test case ran. (7005 ms total)
[ PASSED ] 11 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] PCL.IterativeClosestPoint_PointToPlane
1 FAILED TEST
Press any key to continue . . .
WTF, I do not get it.... In my other code I always get a nasty output when I use PointNormal.... OK, I will try reducing my test and post my minimal source....
OK - if I don't hear anything, I'll assume there's nothing more for me to do here. Please remember to close the issue if relevant.
OK, I have written a minimalistic program:
#include <pcl/point_types.h>
#include <pcl/io/pcd_io.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/fpfh.h>
#include <pcl/registration/registration.h>
#include <pcl/registration/icp.h>
#include <pcl/registration/joint_icp.h>
#include <pcl/registration/icp_nl.h>
#include <pcl/registration/gicp.h>
#include <pcl/registration/gicp6d.h>
#include <pcl/registration/transformation_estimation_point_to_plane.h>
#include <pcl/registration/transformation_validation_euclidean.h>
#include <pcl/registration/correspondence_rejection_median_distance.h>
#include <pcl/registration/correspondence_rejection_sample_consensus.h>
#include <pcl/registration/correspondence_rejection_surface_normal.h>
#include <pcl/registration/correspondence_estimation_normal_shooting.h>
#include <pcl/registration/ia_ransac.h>
#include <pcl/registration/pyramid_feature_matching.h>
#include <pcl/features/ppf.h>
#include <pcl/registration/ppf_registration.h>
#include <pcl/registration/ndt.h>
#include <pcl/registration/sample_consensus_prerejective.h>
#include <pcl/kdtree/impl/kdtree_flann.hpp>
#include <iostream>
using namespace pcl;
using namespace pcl::io;
using namespace std;
PointCloud<PointXYZ> cloud_target, cloud_reg;
PointCloud<PointXYZRGBA> cloud_with_color;
int main( int argc, char** argv )
{
try
{
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_source_myptr = pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>() );
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_target_myptr = pcl::PointCloud<pcl::PointXYZ>::Ptr(new pcl::PointCloud<pcl::PointXYZ>());
if (pcl::io::loadPCDFile<pcl::PointXYZ>("E:/Develop/Source/pcl-1.8.1/test/bun0.pcd", *cloud_source_myptr) == -1)
{
std::cout << "error"<<std::endl;
return (-1);
}
if (pcl::io::loadPCDFile<pcl::PointXYZ>("E:/Develop/Source/pcl-1.8.1/test/bun0.pcd", *cloud_target_myptr) == -1) //* load the file
{
std::cout << "error" << std::endl;
return (-1);
}
PointCloud<PointXYZ>& cloud_source = *cloud_source_myptr;
PointCloud<PointXYZ>& cloud_target = *cloud_target_myptr;
// Create shared pointers
PointCloud<PointXYZ>::Ptr cloud_source_ptr, cloud_target_ptr;
cloud_source_ptr = cloud_source.makeShared();
cloud_target_ptr = cloud_target.makeShared();
// Initialize estimators for surface normals and FPFH features
search::KdTree<PointXYZ>::Ptr tree(new search::KdTree<PointXYZ>);
// Normal estimator
NormalEstimation<PointXYZ, Normal> norm_est;
norm_est.setSearchMethod(tree);
norm_est.setRadiusSearch(0.005);
PointCloud<Normal> normals_src, normals_trg;
// FPFH estimator
FPFHEstimation<PointXYZ, Normal, FPFHSignature33> fpfh_est;
fpfh_est.setSearchMethod(tree);
fpfh_est.setRadiusSearch(0.05);
PointCloud<FPFHSignature33> features_source, features_target;
// Estimate the normals and the FPFH features for the source cloud
norm_est.setInputCloud(cloud_source_ptr);
norm_est.compute(normals_src);
fpfh_est.setInputCloud(cloud_source_ptr);
fpfh_est.setInputNormals(normals_src.makeShared());
fpfh_est.compute(features_source);
// Estimate the normals and the FPFH features for the target cloud
norm_est.setInputCloud(cloud_target_ptr);
norm_est.compute(normals_trg);
fpfh_est.setInputCloud(cloud_target_ptr);
fpfh_est.setInputNormals(normals_trg.makeShared());
fpfh_est.compute(features_target);
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_source_ptr(new pcl::PointCloud<pcl::PointNormal>);
pcl::PointCloud<pcl::PointNormal>::Ptr normalcloud_target_ptr(new pcl::PointCloud<pcl::PointNormal>);
pcl::concatenateFields(*cloud_source_ptr, normals_src, *normalcloud_source_ptr);
pcl::concatenateFields(*cloud_target_ptr, normals_trg, *normalcloud_target_ptr);
// Initialize Sample Consensus Prerejective with 5x the number of iterations and 1/5 feature kNNs as SAC-IA
SampleConsensusPrerejective<PointNormal, PointNormal, FPFHSignature33> reg;
reg.setMaxCorrespondenceDistance(0.1);
reg.setMaximumIterations(5000);
reg.setSimilarityThreshold(0.6f);
reg.setCorrespondenceRandomness(2);
// Set source and target cloud/features
reg.setInputSource(normalcloud_source_ptr);
reg.setInputTarget(normalcloud_target_ptr);
reg.setSourceFeatures(features_source.makeShared());
reg.setTargetFeatures(features_target.makeShared());
// Register
PointCloud<PointNormal> cloud_reg_norm;
reg.align(cloud_reg_norm);
// Check output consistency and quality of alignment
std::cout << "1size" << (static_cast<int> (cloud_reg_norm.points.size()) == static_cast<int> (cloud_source.points.size())) << std::endl;
float inlier_fraction = static_cast<float> (reg.getInliers().size()) / static_cast<float> (cloud_source.points.size());
std::cout << "1frac"<<(inlier_fraction > 0.95f) << std::endl;
// Check again, for all possible caching schemes
typedef pcl::PointXYZ PointT;
for (int iter = 0; iter < 4; iter++)
{
bool force_cache = (iter / 2) != 0; // note: the original (bool)iter / 2 casts before dividing, which is always false
bool force_cache_reciprocal = (iter % 2) != 0;
/*
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
// Ensure that, when force_cache is not set, we are robust to the wrong input
if (force_cache)
tree->setInputCloud(cloud_target_ptr);
reg.setSearchMethodTarget(tree, force_cache);
pcl::search::KdTree<PointT>::Ptr tree_recip(new pcl::search::KdTree<PointT>);
if (force_cache_reciprocal)
tree_recip->setInputCloud(cloud_source_ptr);
reg.setSearchMethodSource(tree_recip, force_cache_reciprocal);
*/
// Register
reg.align(cloud_reg_norm);
std::cout << reg.getFinalTransformation() << std::endl << std::endl;
// Check output consistency and quality of alignment
std::cout << "size" << (int(cloud_reg_norm.points.size()) == int(cloud_source.points.size())) << std::endl;
inlier_fraction = static_cast<float> (reg.getInliers().size()) / static_cast<float> (cloud_source.points.size());
std::cout << "frac" << (inlier_fraction>0.95f) << std::endl;
}
}
catch ( const std::string & rE )
{
std::cout << "Exception caugth in main: " << rE << std::endl;
}
catch ( const char* pcC )
{
std::cout << "Exception caugth in main: " << pcC << std::endl;
}
catch ( ... )
{
std::cout << "Exception caugth in main" << std::endl;
}
return 0;
}
and my WRONG output is
1size1
1frac1
1 0 2.98023e-008 0
-4.47035e-008 0 7.45058e-009 0
2.98023e-008 0 1 0
0 0 0 0
size1
frac1
1 0 -2.98023e-008 0
0 0 -5.96046e-008 0
0 0 1 0
0 0 0 0
size1
frac1
1 0 2.98023e-008 0
4.47035e-008 0 1.49012e-008 0
0 0 1 0
0 0 0 0
size1
frac1
1 0 0 0
2.98023e-008 0 -2.98023e-008 0
-1.49012e-008 0 1 0
0 0 0 0
size1
frac1
Press any key . . .
Can you explain the error to me?
PS: since the source and target point clouds are the same, the identity transform should be the output... PS: ignore my pcl-1.8.1 naming, it is the trunk from last week.
Hmm, this is definitely strange. I would think that this is an instability problem in the transformation estimation class, which probably encounters a degenerate case when you have exactly the same input points in source and target (it applies an SVD on a covariance matrix).
Try to apply the same artificial transformation to the target as is done in the test case.
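Something along these lines should do it (a rough sketch, not verified - it reuses the normalcloud_target_ptr from your snippet and needs pcl/common/transforms.h):
#include <pcl/common/transforms.h>
// Perturb the target with a small, known rigid transformation so that source
// and target are no longer bitwise identical (similar to what the test case does).
Eigen::Affine3f delta = Eigen::Affine3f::Identity ();
delta.translation () << 0.05f, 0.0f, 0.0f;
delta.rotate (Eigen::AngleAxisf (0.1f, Eigen::Vector3f::UnitZ ()));
pcl::PointCloud<pcl::PointNormal>::Ptr target_perturbed (new pcl::PointCloud<pcl::PointNormal>);
pcl::transformPointCloudWithNormals (*normalcloud_target_ptr, *target_perturbed, delta);
reg.setInputTarget (target_perturbed); // instead of normalcloud_target_ptr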
In any case, it seems to me that TransformationEstimationSVD should be examined, but I don't have time for that...
Oh, it also happens if target != source (I tested it by setting the target to bun4.pcd).
What happens at
pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
if we have PointNormals - does the nearest-neighbor search include the normal components (which would be wrong)?
No, KdTree only searches in XYZ-space.
Having different source and target is the whole point of registration. This test code is special, in that it uses the same model, which is synthetically transformed - but this allows us to use the same k-d tree for both models.
For your case, you cannot necessarily base your code on the test code. Use something more like the tutorial: http://pointclouds.org/documentation/tutorials/alignment_prerejective.php
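For reference, the tutorial sets the class up roughly like this (an illustrative sketch only - variable names follow the tutorial, parameter values are indicative, so check the tutorial page for the exact ones; leaf is the voxel-grid leaf size):
pcl::SampleConsensusPrerejective<pcl::PointNormal, pcl::PointNormal, pcl::FPFHSignature33> align;
align.setInputSource (object);                    // downsampled object cloud
align.setSourceFeatures (object_features);        // FPFH features of the object
align.setInputTarget (scene);                     // downsampled scene cloud
align.setTargetFeatures (scene_features);         // FPFH features of the scene
align.setMaximumIterations (50000);               // RANSAC iterations
align.setNumberOfSamples (3);                     // points sampled per pose hypothesis
align.setCorrespondenceRandomness (5);            // nearest features to draw from
align.setSimilarityThreshold (0.9f);              // polygonal edge-length prerejection
align.setMaxCorrespondenceDistance (2.5f * leaf); // inlier distance threshold
align.setInlierFraction (0.25f);                  // required inlier fraction for acceptance
pcl::PointCloud<pcl::PointNormal> object_aligned;
align.align (object_aligned);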
I am not sure I understand you:
I also tested it with two different point clouds (bun0.pcd against bun4.pcd), whereby I get
1size1
1frac1
0.921267 0 -0.373672 0
0.0995174 0 -0.0416482 0
0.375984 0 0.926625 0
0 0 0 0
size1
frac1
0.915439 0 -0.399527 0
0.0949821 0 0.0974093 0
0.391089 0 0.911532 0
0 0 0 0
size1
frac1
0.948044 0 -0.316758 0
-0.00855055 0 0.0676723 0
0.318025 0 0.946089 0
0 0 0 0
size1
frac1
0.922942 0 -0.258214 0
-0.225376 0 0.238762 0
0.312063 0 0.936119 0
0 0 0 0
size1
frac1
Press any key . . .
which is also wrong (the last row is completely zero)... except for the point type and some parameter settings, I do not see any difference from the tutorial...
Can you reproduce the behaviour? If yes, can you post a bug report if it is in TransformationEstimationSVD (I am not sure where to file it)?
What I mean is that the code you are using is NOT the tutorial code from the link I gave you - it is a near-copy of the test code, which you cannot use for different models. Have you tried running the tutorial code in the link, using the provided object/scene models?
I cannot post a bug report when I cannot reproduce the problem. I'm sorry but this is on your side until we can confirm that this is specific to my class. If you can provide a minimal code example based on the tutorial where it unexpectedly fails, or if the test code fails (where you still have the synthetic transformation intact), then you can file an issue on the class.
For the TransformationEstimationSVD class, you can just take another minimal example where you were supposed to get the identity transform and put that in the issue.
OK, sorry.
I have now compiled the tutorial 1:1 with the provided scene/object, and the output is still wrong:
(the only difference is that the include headers use
#include <pcl/features/impl/normal_3d_omp.hpp>
#include <pcl/features/impl/fpfh_omp.hpp>
instead of
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/fpfh_omp.h>
)
and here is the output
> Loading point clouds...
Failed to find match for field 'curvature'.
Failed to find match for field 'normal_x'.
Failed to find match for field 'normal_y'.
Failed to find match for field 'normal_z'.
Failed to find match for field 'curvature'.
> Downsampling...
> Estimating scene normals...
> Estimating features...
> Starting alignment...
Alignment took 1134ms.
| -0.257 0.000 0.614 |
R = | -0.966 0.000 -0.157 |
| -0.007 0.000 -0.774 |
t = < 0.000, 0.000, 0.000 >
Inliers: 1148/3432
and if I print
std::cout << align.getFinalTransformation() << std::endl;
directly, I get
-0.256923 0 0.613905 0
-0.966404 0 -0.157295 0
-0.00738906 0 -0.773549 0
0 0 0 0
Note the last line: it should be 0 0 0 1, not 0 0 0 0!
That is not a valid transformation matrix. The visualization of the transformed object is also FLAT because of that strange matrix (the visualization does not show the object registered onto the scene; the blue cloud is 2D after transformation with that strange matrix)...
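For reference, a quick way to sanity-check such a matrix (a small sketch in plain Eigen, not part of the PCL API): the bottom row must be [0 0 0 1] and the upper-left 3x3 block must be a proper rotation. The matrix above fails both tests - note that the second column of R is all zeros as well, not just the bottom row.
#include <Eigen/Dense>
#include <cmath>
// Returns true if T looks like a rigid transformation.
bool looksRigid (const Eigen::Matrix4f &T, float eps = 1e-4f)
{
  if ((T.row (3) - Eigen::RowVector4f (0.f, 0.f, 0.f, 1.f)).norm () > eps)
    return false;                                      // bottom row must be 0 0 0 1
  const Eigen::Matrix3f R = T.topLeftCorner<3, 3> ();  // rotation block
  return (R * R.transpose () - Eigen::Matrix3f::Identity ()).norm () < eps
      && std::abs (R.determinant () - 1.f) < eps;      // orthonormal with determinant +1
}
For example: std::cout << looksRigid (align.getFinalTransformation ()) << std::endl;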
I did not read this topic but it looks like somewhere in the code someone did this:
Eigen::Vector3d translation;
// ...
Eigen::Matrix3d rotation;
// ...
Eigen::Affine3d my_result;
my_result.translation() = translation;
my_result.linear() = rotation;
which results in an invalid matrix because the last row was never initialized (or was zero-initialized).
Instead, it should be:
Eigen::Vector3d translation;
// ...
Eigen::Matrix3d rotation;
// ...
Eigen::Affine3d my_result (Eigen::Affine3d::Identity());
my_result.translation() = translation;
my_result.linear() = rotation;
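A tiny standalone program shows the difference (minimal sketch, plain Eigen):
#include <Eigen/Geometry>
#include <iostream>
int main ()
{
  Eigen::Affine3d bad;                                 // bottom row left uninitialized
  bad.linear () = Eigen::Matrix3d::Identity ();
  bad.translation () = Eigen::Vector3d (1.0, 2.0, 3.0);
  Eigen::Affine3d good (Eigen::Affine3d::Identity ()); // bottom row is 0 0 0 1
  good.linear () = Eigen::Matrix3d::Identity ();
  good.translation () = Eigen::Vector3d (1.0, 2.0, 3.0);
  std::cout << bad.matrix () << std::endl << std::endl; // last row: whatever was in memory
  std::cout << good.matrix () << std::endl;             // last row: 0 0 0 1
  return 0;
}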
Sounds interesting - I tried changing the align call in the tutorial to
align.align(*object_aligned, pcl::SampleConsensusPrerejective<PointNT, PointNT, FeatureT>::Matrix4::Identity());
but without success...
OK, then I guess we can conclude that something in the transformation estimation underneath must be wrong - or somewhere else. @GeraldLodron, what is your system configuration in terms of versions, dependencies, and so on?
I guess we can only trace this error if you are able to somehow test the underlying transformation estimation...
My settings: Visual Studio 2013 x64, Eigen 3.2.5, Boost 1.58.0, CUDA 7.0.28, FLANN 1.8.4, OpenCV 3.0.0, VTK from ParaView 4.4.0 (compiled with Qt 5.5.0 and OpenGL), QHull 2012.1.0, TBB 4.2.3
PS: I made a test in my main() by defining
#define EIGEN_INITIALIZE_MATRICES_BY_NAN 1
before all includes; the result was the same matrix (with zeroes).
Would it be possible for you to try running TransformationEstimationSVD on a minimal example of the same random point cloud with 3 points? I'm thinking something like this (not verified):
pcl::PointCloud<pcl::PointXYZ> cloud;
for (int i = 0; i < 3; ++i)
cloud.push_back (pcl::PointXYZ (static_cast<float> (std::rand ()) / static_cast<float> (RAND_MAX),
                                static_cast<float> (std::rand ()) / static_cast<float> (RAND_MAX),
                                static_cast<float> (std::rand ()) / static_cast<float> (RAND_MAX)));
pcl::registration::TransformationEstimationSVD<pcl::PointXYZ,pcl::PointXYZ> reg;
Eigen::Matrix4f T;
reg.estimateRigidTransformation (cloud, cloud, T);
std::cout << T << std::endl;
Sure, the output of the above code is:
1 -1.19209e-007 0 1.19209e-007
2.98023e-008 1 0 0
0 1.49012e-008 1 5.96046e-008
0 0 0 1
seems ok
Then I'm lost. My class uses exactly this class in exactly this way to generate candidate poses from 3 random point pairs.
@GeraldLodron The only next step I can come up with is for you to copy out the code of my class and debug your way through it until this error occurs, but that is quite some work on code you didn't create.
@VictorLamoine do you have any suggestions how we go on?
As I said, the registration works for PointXYZ; only for PointNormal it doesn't.
But if I use
pcl::PointCloud<pcl::PointNormal> cloud;
for (int i = 0; i < 3; ++i)
{
cloud.push_back(pcl::PointNormal());
cloud.points.back().x = static_cast<float> (std::rand()) / static_cast<float> (RAND_MAX);
cloud.points.back().y = static_cast<float> (std::rand()) / static_cast<float> (RAND_MAX);
cloud.points.back().z = static_cast<float> (std::rand()) / static_cast<float> (RAND_MAX);
}
pcl::registration::TransformationEstimationSVD<pcl::PointNormal, pcl::PointNormal> reg;
Eigen::Matrix4f T;
reg.estimateRigidTransformation(cloud, cloud, T);
std::cout << T << std::endl;
I also get
1 -1.49012e-008 2.23517e-008 8.9407e-008
-5.96046e-008 1 5.96046e-008 2.38419e-007
2.98023e-008 2.98023e-008 1 8.9407e-008
0 0 0 1
strange...
Precisely - equally strange to me.
OK, I put some std::cout statements into sample_consensus_prerejective.hpp.
The estimateRigidTransformation call gives a valid transformation matrix, and before the for loop the final transform is a valid identity.
But after
getFitness (inliers, error);
final_transformation_ = final_transformation_prev;
the final transform becomes strange...
Strangely, if I print final_transformation_prev with std::cout after estimateRigidTransformation, the whole program works again... without the std::cout it does not:
transformation_estimation_->estimateRigidTransformation (*input_, sample_indices, *target_, corresponding_indices, transformation_);
std::cout << "pcl::SampleConsensusPrerejective<PointSource, PointTarget, FeatureT>::computeTransformation transformation_estimation_->estimateRigidTransformation " << transformation_ << std::endl;
// Take a backup of previous result
const Matrix4 final_transformation_prev = final_transformation_;
// UNCOMMENTING THIS LINE LEADS TO A VALID final_transformation_ - why?!
//std::cout << "pcl::SampleConsensusPrerejective<PointSource, PointTarget, FeatureT>::computeTransformation final_transformation_prev " << final_transformation_prev << std::endl;
// Set final result to current transformation
final_transformation_ = transformation_;
// Transform the input and compute the error (uses input_ and final_transformation_)
getFitness (inliers, error);
// Restore previous result
final_transformation_ = final_transformation_prev;
// Here the matrix gets corrupted if we do not print the std::cout above
std::cout << "pcl::SampleConsensusPrerejective<PointSource, PointTarget, FeatureT>::computeTransformation final_transformation_ " << final_transformation_ << std::endl;
int a;
std::cin >> a;
This is weird. But I noticed that there's an inconsistent use of Matrix4/Matrix4f. Could you try to change the line in the code to use Eigen::Matrix4f instead of Matrix4?
const Eigen::Matrix4f final_transformation_prev = final_transformation_;
did not solve the problem
Could it then be a compiler problem, meaning the compiler erroneously optimizes something out? Could you maybe try to recompile your code in Debug mode?
Currently I have some problems with Debug builds, so not easily within the next few hours...
Removing const or using the copy constructor also did not help.
Hmm, ok. What if you say this:
const Eigen::Matrix4f final_transformation_prev = final_transformation_.eval ();
const Eigen::Matrix4f final_transformation_prev = final_transformation_.eval ();
and
const Matrix4 final_transformation_prev = final_transformation_;
final_transformation_prev = final_transformation_;
without success
Hi
I wrote some code 2 years ago in which I used pcl::SampleConsensusPrerejective with FPFHSignature33, with template parameters like pcl::SampleConsensusPrerejective<pcl::PointNormal, pcl::PointNormal, pcl::FPFHSignature33>.
I updated PCL (trunk) 1 year ago, and since then my code has not worked any more (the resulting 4x4 transformation matrix was very strange, with the last row all zero). I now need that code again, and I hoped that I had made a mistake in compilation (or that there was a bug), so I updated to the current trunk yesterday and recompiled PCL, but the behavior is the same. To track down the error I checked the PCL test test_registration.cpp. The only difference in usage I found was that there the class is used like pcl::SampleConsensusPrerejective<pcl::PointXYZ, pcl::PointXYZ, pcl::FPFHSignature33>. So I did the same, and surprise, it worked again…
I am using Visual Studio 2013 x64, and one big change I found 1 year ago was the PCL_ONLY_CORE_POINT_TYPES flag, which has had to be enabled since then because of some Visual Studio limitations…
Any suggestions what is going wrong?
Best regards