norlab-ulaval / libpointmatcher

An Iterative Closest Point (ICP) library for 2D and 3D mapping in Robotics
BSD 3-Clause "New" or "Revised" License

How is the registration transform represented? #227

Closed · nickponline closed this 6 years ago

nickponline commented 7 years ago

[EDIT: See second post for a much simpler example]

I am registering two very similar point clouds using the example application and example config.yaml.

/usr/local/bin/pmicp --isTransfoSaved true -v --config config.yaml reference.csv reading.csv

Which outputs

* KDTreeMatcher: initialized with knn=1, epsilon=0, searchType=1 and maxDist=inf
Applying 1 DataPoints filters - 65503 points in
* SamplingSurfaceNormalDataPointsFilter - 32623 points out (-50.1962%)
Applied 1 filters - 32623 points out (-50.1962%)
PointMatcher::icp - reference pre-processing took 0.133289 [s]
Applying 1 DataPoints filters - 68161 points in
* RandomSamplingDataPointsFilter - 33939 points out (-50.2076%)
Applied 1 filters - 33939 points out (-50.2076%)
PointMatcher::icp - reading pre-processing took 0.00785914 [s]
PointMatcher::icp - 11 iterations took 10.2297 [s]
match ratio: 0.900027

The final transform is given as:

   0.999963   0.0017609 -0.00838319    -7138.19
-0.00174143    0.999996  0.00232884     954.812
 0.00838725 -0.00231415    0.999962     4881.22
          0           0           0           1

When I look at the difference between the in and out datasets, it is definitely not translated by that amount. Am I misunderstanding something?

==> test_data_in.csv <==
x,y,z,R,G,B
539715 , 4.06576e+06 , -1.37 , 25443,25443,16705
539715 , 4.06576e+06 , -1.37 , 21074,20046,13107
539714 , 4.06576e+06 , -1.27 , 23644,27756,18504
539716 , 4.06576e+06 , -1.35 , 31611,32639,22616
539714 , 4.06576e+06 , -1.24 , 22102,23901,15163
539719 , 4.06576e+06 , -1.37 , 28013,24929,19275
539721 , 4.06576e+06 , -1.49 , 29041,24929,21331
539720 , 4.06576e+06 , -1.45 , 21074,16191,14649
539722 , 4.06576e+06 , -1.48 , 31354,27499,22102
==> test_data_out.csv <==
x,y,z,R,G,B
539717 , 4.06576e+06 , -2.19436 , 25443,25443,16705
539717 , 4.06576e+06 , -2.19549 , 21074,20046,13107
539716 , 4.06576e+06 , -2.10365 , 23644,27756,18504
539718 , 4.06576e+06 , -2.16581 , 31611,32639,22616
539715 , 4.06576e+06 , -2.07991 , 22102,23901,15163
539720 , 4.06576e+06 , -2.17419 , 28013,24929,19275
539723 , 4.06576e+06 , -2.26834 , 29041,24929,21331
539721 , 4.06576e+06 , -2.24427 , 21074,16191,14649
539723 , 4.06576e+06 , -2.25612 , 31354,27499,22102
==> test_ref.csv <==
x,y,z,R,G,B
539573 , 4.06592e+06 , -1.08 , 44461,46260,53456
539575 , 4.06592e+06 , -0.8 , 44204,46003,53199
539575 , 4.06592e+06 , -0.78 , 43433,45232,52685
539574 , 4.06592e+06 , -1.07 , 44461,46260,53456
539575 , 4.06592e+06 , -1.21 , 44461,47031,54227
539575 , 4.06592e+06 , -1.19 , 44461,47031,53970
539574 , 4.06592e+06 , -1.2 , 44461,46774,53970
539574 , 4.06592e+06 , -1.22 , 44461,46774,54227
539568 , 4.06592e+06 , -0.87 , 16191,19018,14135

Here is my reference: https://www.dropbox.com/s/ocf1u1ybuqpj8y8/reference.csv?dl=0

and my reading: https://www.dropbox.com/s/2s9wewixlo7e8q2/reading.csv?dl=0
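For context, this is essentially what I am running in code as well (a rough sketch along the lines of the library's icp_simple example; my understanding is that T should map the reading into the reference frame):

#include <iostream>
#include "pointmatcher/PointMatcher.h"

typedef PointMatcher<float> PM;
typedef PM::DataPoints DP;

int main()
{
    // Same clouds as passed to pmicp above.
    const DP ref(DP::load("reference.csv"));
    const DP data(DP::load("reading.csv"));

    // Default ICP chain, as in icp_simple.
    PM::ICP icp;
    icp.setDefault();

    // T is the 4x4 homogeneous matrix reported as the final transform;
    // applying it to the reading should bring it onto the reference.
    const PM::TransformationParameters T = icp(data, ref);

    DP data_out(data);
    icp.transformations.apply(data_out, T);

    std::cout << "Final transformation:" << std::endl << T << std::endl;
    return 0;
}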

nickponline commented 7 years ago

Here is a simple dataset with 10 3D points, and a second one which is just the first with the x coordinate offset by 1.0:

reference.csv
0.90821682  0.17055668  0.43373967
0.12891558  0.02148021  0.13497946
0.16901483  0.98805444  0.04437725
0.80122159  0.32292443  0.14591973
0.88935897  0.98422458  0.34453548
0.66926361  0.46065528  0.33612961
0.68486366  0.65538499  0.84244886
0.72579089  0.41827523  0.57473447
0.89471663  0.86449667  0.94694097
0.51761563  0.37865354  0.03339747
reading.csv
1.90821682  0.17055668  0.43373967
1.12891558  0.02148021  0.13497946
1.16901483  0.98805444  0.04437725
1.80122159  0.32292443  0.14591973
1.88935897  0.98422458  0.34453548
1.66926361  0.46065528  0.33612961
1.68486366  0.65538499  0.84244886
1.72579089  0.41827523  0.57473447
1.89471663  0.86449667  0.94694097
1.51761563  0.37865354  0.03339747

Final transformation:

 0.278974 -0.528653  0.801686   3374.46
 0.410202  0.820435  0.398273   484.883
-0.868279  0.217745  0.445734   4457.86
        0         0         0         1

This seems really weird. I get the same output using:

./icp_simple reference.csv reading.csv
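If I understand the convention correctly (the returned T maps the reading onto the reference), I would have expected something close to an identity rotation with a translation of -1 along x for this dataset:

 1  0  0 -1
 0  1  0  0
 0  0  1  0
 0  0  0  1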

Thanks so much for the help, I really appreciate it. I just feel I'm misunderstanding something, even though I've read the docs a number of times now :)

pomerlef commented 7 years ago

Be careful when you produce small examples. ICP is like doing a robust regression for line fitting: if you remove too many points, the outlier filters will start rejecting good ones.

Here is your example with a functional configuration file (result: https://user-images.githubusercontent.com/502089/31678109-17747216-b33b-11e7-8311-c581578cb518.gif):

readingDataPointsFilters:

referenceDataPointsFilters:

matcher:
  KDTreeMatcher:
    knn: 1
    epsilon: 0 

outlierFilters:

errorMinimizer:
  PointToPointErrorMinimizer

transformationCheckers:
  - CounterTransformationChecker:
      maxIterationCount: 40
  - DifferentialTransformationChecker:
      minDiffRotErr: 0.01
      minDiffTransErr: 0.01
      smoothLength: 4   

#inspector:
#  NullInspector

inspector:
  VTKFileInspector:
    baseFileName: pointmatcher-run1
    dumpPerfOnExit: 0
    dumpStats: 0
    dumpIterationInfo: 1
    dumpDataLinks: 1
    dumpReading: 1
    dumpReference: 1

logger:
  NullLogger
#  FileLogger

This config won't work with more complex point clouds.
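As a side note, the same chain can also be loaded programmatically instead of going through pmicp (a minimal sketch, assuming the usual loadFromYaml usage and a config.yaml containing the chain above):

#include <fstream>
#include <iostream>
#include "pointmatcher/PointMatcher.h"

typedef PointMatcher<float> PM;
typedef PM::DataPoints DP;

int main()
{
    PM::ICP icp;

    // Load the YAML chain above instead of the default one.
    std::ifstream configFile("config.yaml");
    icp.loadFromYaml(configFile);

    const DP ref(DP::load("reference.csv"));
    const DP data(DP::load("reading.csv"));

    // T maps the reading into the reference frame.
    const PM::TransformationParameters T = icp(data, ref);
    std::cout << T << std::endl;
    return 0;
}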

nickponline commented 7 years ago

That makes sense for the small (too simple) example. I'm still curious what is going wrong with the CSVs in the first post, where the transformation is also very drastic. The two clouds should only be off by a couple of metres, but instead the translation magnitude is almost 9000.
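As a rough sanity check on that first transform (using the x row of the matrix and the first point of test_data_in.csv), the large translation seems to mostly cancel the rotation terms, which are applied to coordinates of this magnitude:

x' = 0.999963*539715 + 0.0017609*4.06576e+06 + (-0.00838319)*(-1.37) - 7138.19
   ≈ 539695.0 + 7159.4 + 0.0 - 7138.2
   ≈ 539716.2

So the transformed x lands within a metre or two of the corresponding test_data_out.csv value (539717), even though the translation column on its own looks like thousands of metres.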


pomerlef commented 7 years ago

You can check what is happening at each iteration using ParaView and the VTKFileInspector from the last config.