This is how it performs across iterations:
from functools import partial
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for the '3d' projection on older Matplotlib versions
from pycpd import DeformableRegistration

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
callback = partial(visualize, ax=ax)  # visualize() as in examples/fish_deformable_3D.py
reg = DeformableRegistration(**{'X': static, 'Y': ty_affine, 'alpha': 0.1, 'beta': 5})
ty, pr = reg.register(callback)
plt.show()
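where visualize is essentially the plotting callback from that example; for completeness, a minimal sketch of it (the exact version in examples/fish_deformable_3D.py may differ slightly):

def visualize(iteration, error, X, Y, ax):
    # Redraw the target (X) and the current deformed moving cloud (Y) each iteration.
    ax.cla()
    ax.scatter(X[:, 0], X[:, 1], X[:, 2], color='red', label='Target')
    ax.scatter(Y[:, 0], Y[:, 1], Y[:, 2], color='blue', label='Source')
    ax.set_title('Iteration: {:d}, error: {:.4f}'.format(iteration, error))
    ax.legend(loc='upper left')
    plt.draw()
    plt.pause(0.001)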
So, it seems to start from a "smaller" point cloud (probably, this is the case mentioned in one of the old issues), and doesn't run enough iterations (?).
You're only showing 3 iterations. Typically these are run for tens, hundreds, or sometimes more iterations. I'd definitely try increasing the number of iterations you are allowing.
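For example, something along these lines (a sketch, assuming a pycpd version that exposes max_iterations and tolerance as keyword arguments; in older versions these are set inside the class itself):

from pycpd import DeformableRegistration

# Raise the iteration cap and tighten the stopping tolerance so the EM loop
# is not declared converged after only a handful of iterations.
reg = DeformableRegistration(
    X=static, Y=ty_affine,
    alpha=0.1, beta=5,
    max_iterations=500,
    tolerance=1e-5,
)
ty, params = reg.register(callback)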
Another thing that stands out to me is that your points have additional "information" in their colours. E.g., you mostly have black and white points. The black points should match the black points and the white the white.
You may consider adding a 4th dimension that is coded in some way based on the colours. E.g., all of the points that are white are 0 on this additional dimension and all of the points that are black are 100. Now, when you solve the registration problem, you have let the program know that these black/white points should match one another. You can change the range (0-100 vs 0-1 vs 0-1000) to indicate how important it is that black registers to black and white registers to white.
When you ultimately apply your registration, you can just ignore the 4th dimension (colour) and plot only the first three dimensions.
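A sketch of that idea (static_colors / moving_colors are hypothetical per-point intensities in [0, 1]; the weighting value is only an illustration):

import numpy as np
from pycpd import DeformableRegistration

# Append a 4th "colour" dimension; the scale factor controls how strongly
# black-to-black / white-to-white matching is enforced relative to geometry.
colour_weight = 100.0
static_4d = np.hstack([static, colour_weight * static_colors.reshape(-1, 1)])
moving_4d = np.hstack([moving, colour_weight * moving_colors.reshape(-1, 1)])

reg = DeformableRegistration(X=static_4d, Y=moving_4d, alpha=0.1, beta=5)
ty_4d, _ = reg.register()
ty_xyz = ty_4d[:, :3]  # drop the colour dimension before plotting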
Anthony, thank you for the reply!
The 3 iterations are all it outputs with the plot-callback code from examples/fish_deformable_3D.py. I will try changing the tolerance threshold in the DeformableRegistration class. (However, I don't understand why only 3 iterations are shown: with the same tolerance threshold it matches the test Fish data very nicely.)
Thank you for this comment. Indeed, when I used the Iterative Closest Point (ICP) algorithm implemented in Open3D for rigid registration, it took both color and depth information into account. If I manage to make non-rigid registration work better than the affine transform, I will definitely try adding a color-matching constraint as you recommend.
Good luck :), and keep us posted.
The weird thing is that the "converged" result is, subjectively, less accurate than the affine registration. And looking at the image at Iteration 1, it seems the deformable registration does not take the provided affine transform as initialization. Thank you :)
Ya, they use a different metric of what a good registration is. That's why affine/deformable are different.
As for the fact that the registration starts smaller and then grows, that is just a quirk of how the algorithm works. Applying the affine registration first can still help (I find it does), even if it doesn't look like it because the point cloud is smaller at the start.
Thank you @gattia for replying to issues. I just want to add a couple of points:
1. Non-rigid registration techniques have a small capture range. You will often see phrases like "rigid followed by non-rigid registration", "affine followed by non-rigid registration", or "to account for gross misalignment, an affine registration" in the literature. Looking at your point clouds, you definitely did the right thing by performing affine registration first.
2. Looking at the dog image, the transformation between the two point clouds is articulated, i.e. different sections of the point cloud are transformed using different rigid transformations. If your transformation model reflects this, you might get better results. See Horaud et al. and the corresponding project page. I believe they provide a Matlab implementation.
3. It also seems like your point cloud is not a full 3D shape, since only the frontal view of the dog is captured. In addition, the spatial distribution of the point cloud does not look even. This matters because expectation maximization is prone to local minima (see Fig. 1 of Jian and Vemuri), and the issue is exacerbated in the presence of symmetries and repeated patterns. One way to increase robustness is to supply additional information, so you should definitely follow @gattia's advice and incorporate individual colours as a fourth dimension of your point cloud.
4. You have only performed non-rigid registration for three iterations, so I am not sure what will happen if you let it run longer by tweaking the stopping criteria. You should also definitely tweak the α and β parameters (a small parameter-sweep sketch follows after the quote below), as discussed in the original CPD paper:
Parameters λ (α in the code) and β both reflect the amount of smoothness regularization. A discussion on the difference between λ and β can be found in [29], [30]. Briefly speaking, parameter β defines the model of the smoothness regularizer (width of the smoothing Gaussian filter in (20)). Parameter λ represents the trade-off between the goodness of maximum likelihood fit and regularization.
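Something like the following could help find reasonable values (a sketch only; the grid values are illustrative and static / ty_affine are the arrays from above):

import numpy as np
from pycpd import DeformableRegistration

# Coarse grid over the smoothness parameters; score each result crudely by the
# mean distance from every deformed point to its nearest target point.
for alpha in (0.01, 0.1, 1.0, 10.0):
    for beta in (1, 2, 5, 10):
        reg = DeformableRegistration(X=static, Y=ty_affine, alpha=alpha, beta=beta)
        ty, _ = reg.register()
        d = np.linalg.norm(ty[:, None, :] - static[None, :, :], axis=2)
        print(alpha, beta, d.min(axis=1).mean())  # lower is usually better, but inspect visually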
Dear Siavash, thank you for the reply.
4) Changing the tolerance from 0.001 to 0.0001 did help! Now the optimization runs for 18 iterations and the result is comparable to the affine registration.
Iterations:
Comparison to Affine:
So, it looks more accurate in the hands part, but less accurate in the legs. I will continue experimenting with the alpha, beta and tolerance values...
3) The point clouds represent only the front part of the 3D shape, since these are experimental data captured with a Kinect depth camera (I took them from the Deformable 3D Reconstruction Dataset).
In my research project I need to implement tracking of a moving subject's surface, using color and depth data captured with a depth camera (Intel RealSense D435). By tracking I mean finding the transformation between the point clouds at successive time moments. The tracking procedure should also keep track of each point's ID, i.e. save information about which point of the deformed surface moved where. I am considering different approaches now.
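For example, one approach I might try is reading correspondences off pycpd's EM posterior matrix (attribute P on the registration object, if I read the code correctly); a sketch:

import numpy as np
from pycpd import DeformableRegistration

reg = DeformableRegistration(X=static, Y=moving, alpha=0.1, beta=5)
ty, _ = reg.register()

# reg.P[m, n] is the posterior probability that moving point m corresponds to
# static point n; the argmax gives a hard point-to-point assignment.
correspondence = np.argmax(reg.P, axis=1)
# moving point i is estimated to end up at ty[i], matching static[correspondence[i]]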
2) Thank you for the link. I have downloaded the MATLAB code of this project and will try running it with Octave later; it's a pity it is not in Python. Also, I am not sure that the dog's transformation is articulated: it is soft and probably changed its shape smoothly.
It seems like you know what you are doing. If there is anything that we can help you with, please feel free to continue the discussion. Do you think the issue is resolved?
Thank you, I think the issue is resolved. If you have any ideas about the described research problem based on your experience, I would be grateful to hear them. Before closing the issue, one last question: is there possibly a way of parallelizing the non-rigid deformable calculation?
I don't think there is any way you could parallelize it because each step is dependent on the previous step.
If you are looking to speed it up, you could try the low-rank implementation in the development branch. It is considerably faster. See:
https://github.com/siavashk/pycpd/blob/development/examples/fish_deformable_3D_lowrank.py
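Usage is roughly as follows (a sketch, assuming the development-branch DeformableRegistration exposes low_rank and num_eig keyword arguments as in the linked example):

from pycpd import DeformableRegistration

# Low-rank approximation of the Gaussian kernel: keep only the leading
# eigenvectors, which makes the M-step considerably cheaper.
reg = DeformableRegistration(
    X=static, Y=ty_affine,
    alpha=0.1, beta=5,
    low_rank=True,
    num_eig=100,  # number of eigenvectors to keep; a speed/accuracy trade-off
)
ty, params = reg.register(callback)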
Depends on what you mean by parallelization. The algorithm is a series of expectation followed by maximization steps. As @gattia mentioned, you can't really do the maximization step before you have done the expectation step, so concurrency does not help you here.
But you might be able to get a speed up if you implement the entire algorithm on the GPU. I am emphasizing entire because communication between CPU and GPU is a bottleneck when it comes to performance. If you do so, you might be able to get a speed up during individual steps of the algorithm. For example, if you use cuSolver to solve the linear system, you will probably get a speed up.
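As a rough illustration only (using CuPy, whose cupy.linalg.solve is backed by cuSolver; A and B below are stand-ins for the deformable M-step system, not pycpd variables):

import numpy as np
import cupy as cp

# Stand-ins for the M-step linear system A W = B: A is the (M x M) regularized
# Gaussian-kernel matrix, B the (M x D) right-hand side.
M, D = 500, 3
A = np.random.rand(M, M) + M * np.eye(M)  # diagonally dominant, hence well-conditioned
B = np.random.rand(M, D)

A_gpu = cp.asarray(A)                  # one-time host-to-device copy
B_gpu = cp.asarray(B)
W_gpu = cp.linalg.solve(A_gpu, B_gpu)  # cuSolver under the hood
W = cp.asnumpy(W_gpu)                  # copy back only when the result is needed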
Is this still an issue?
No, it works now, thanks.
Hi! I'm exploring your excellent library and tried to apply 3D deformable registration to my data. The data are point clouds that I generated from the Snoopy dataset; they can be downloaded here: https://drive.google.com/drive/folders/1_jVk7gNw1p34lU1d_AKBvrEoYTKwDipG?usp=sharing
I read and downsample the point clouds with Open3D:
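Roughly along these lines (a sketch with placeholder file names and voxel size, assuming a recent Open3D where voxel_down_sample is a PointCloud method):

import numpy as np
import open3d as o3d

# Load the two clouds and downsample them to a few hundred points each.
pcd_static = o3d.io.read_point_cloud("snoopy_frame_000.ply")
pcd_moving = o3d.io.read_point_cloud("snoopy_frame_010.ply")
pcd_static = pcd_static.voxel_down_sample(voxel_size=0.02)
pcd_moving = pcd_moving.voxel_down_sample(voxel_size=0.02)

static = np.asarray(pcd_static.points)
moving = np.asarray(pcd_moving.points)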
The resulting array shapes are static: (329, 3) and moving: (331, 3).
Then I follow your recommendations from another issue and first perform affine registration, which matches the point clouds nicely. The subsequent deformable registration doesn't seem to improve the matching. (The code was copied from: https://github.com/siavashk/pycpd/issues/41)
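For reference, the pipeline is roughly the following (a sketch following the pattern from that issue; the alpha/beta values are just my starting point):

from pycpd import AffineRegistration, DeformableRegistration

# Step 1: affine registration to remove the gross misalignment.
affine = AffineRegistration(X=static, Y=moving)
ty_affine, _ = affine.register()

# Step 2: deformable (non-rigid) registration, using the affine result
# as the new moving cloud.
deform = DeformableRegistration(X=static, Y=ty_affine, alpha=0.1, beta=5)
ty_deform, _ = deform.register()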
Changing the alpha and beta parameters does not help: the smaller either of them is, the more the points tend to shrink toward the center.
Could you tell me whether this method of non-rigid registration is applicable to such point clouds, and how the result could be improved?