@zosso: What do you think about this? http://zeus.mat.puc-rio.br/tomlew/pdfs/vector_field_sibgrapi.pdf
It's overkill for our problem. I don't think that at this stage you need adaptive discretizations etc. Keep your parameters on a regular grid, because this is what makes the regularization "simple" (you cannot regularize by smoothing the parameters that easily, if your grid is not regular).
The actual basis function you use is secondary. B-splines are optimal since they have the best properties for a given support, so the weight function can be as sparse as possible. But it really doesn't matter that much, I think.
http://www.itk.org/Doxygen43/html/classitk_1_1BSplineScatteredDataPointSetToImageFilter.html
Maybe with this filter we can get the BSpline parametrization of the field easily, and this is what we want, right?
Nah. Your parametrization points u_k (or something like that?) are not scattered, but distributed on a regular grid. So theoretically you can work with the ordinary BSpline interpolator: http://www.itk.org/Doxygen43/html/classitk_1_1BSplineInterpolateImageFunction.html
However, since you need the sparse matrix of weights, W (which remains constant during the whole process, so interpolation can be done faster, or at least as fast), and since you need its inverse somewhere, you can use
http://www.itk.org/Doxygen43/html/classitk_1_1BSplineInterpolationWeightFunction.html
to set up W, as far as I understand.
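As a minimal sketch of what that route could look like (assuming a 3D parameter grid, cubic splines and the ITK 4.x API; the node coordinates below are made up), each contour node would give one row of the sparse matrix W:

#include "itkBSplineInterpolationWeightFunction.h"
#include "itkContinuousIndex.h"

int main()
{
  const unsigned int Dimension   = 3;
  const unsigned int SplineOrder = 3;
  typedef itk::BSplineInterpolationWeightFunction<double, Dimension,
                                                  SplineOrder> WeightsFunctionType;

  WeightsFunctionType::Pointer weightsFunction = WeightsFunctionType::New();

  // Position of one contour node as a continuous index in the control-point
  // (parameter) grid; the coordinates are hypothetical.
  itk::ContinuousIndex<double, Dimension> node;
  node[0] = 3.4; node[1] = 7.1; node[2] = 2.8;

  // ITK 4.x: WeightsType is an itk::Array that must be sized beforehand.
  WeightsFunctionType::WeightsType weights( weightsFunction->GetNumberOfWeights() );
  WeightsFunctionType::IndexType   startIndex;
  weightsFunction->Evaluate( node, weights, startIndex );

  // 'weights' now holds the (SplineOrder+1)^Dimension = 64 non-zero entries of
  // this node's row of W; 'startIndex' locates them on the parameter grid.
  return 0;
}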
PS: I just realized that parts of my IJ contributions have found their way into standard ITK: http://www.itk.org/Doxygen43/html/classitk_1_1ComplexBSplineInterpolateImageFunction.html yeah!
Ok, now I am addressing this issue.
If I am not wrong, what I want is:
Even once we agree on the procedure, I think I am missing something. The itk::BSplineInterpolateImageFunction
takes an image as input (and we do not have an image, right?). Then you can compute the value of the image at any off-grid point. BUT what we want is to base a B-spline at each u_k (that is, to set a control point at each u_k, isn't it?). Then we would fit the sparse data of the contour nodes with the itk::BSplineScatteredDataPointSetToImageFilter
. Could this be done by setting the values at the nodes to 1.0 and to 0.0 elsewhere? And finally, we would get the weights on the defined B-spline grid with the class you suggested.
I don't think I fully understand how to do it :(. Maybe I am just missing how to get the first image that assigns values to u_k.
I think that ITK does not have the required implementation yet. We need to use the radialized cubic tensor-product B-spline as the weight (an RBF), don't we?
I need to use the B-spline kernel to compute the weight as if it were the IDW kernel I am currently using, right? (http://www.itk.org/Doxygen/html/classitk_1_1BSplineKernelFunction.html)
I think you're right (with your very last comment): the shape (radial or not) of the basis function (psi?) doesn't matter. At least not a lot. So simply replacing IDW by the b-spline kernel should be ok. And I think you can even use the http://www.itk.org/Doxygen43/html/classitk_1_1BSplineInterpolationWeightFunction.html function to compute the complete weight matrix for you...
so no need for scattered point interpolation etc. (although, that could possibly work, too, but quite frankly I think the sparse matrix solution is much faster, because the weights do not change over the iterations)
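To spell out the "weights do not change over the iterations" point: once W is assembled, every iteration reduces to a sparse matrix-vector product. A rough sketch, e.g. with vnl_sparse_matrix (the sizes and the single entry are made up):

#include <vnl/vnl_sparse_matrix.h>
#include <vnl/vnl_vector.h>

int main()
{
  // Hypothetical sizes: 200 contour nodes, 1000 control points (one component).
  const unsigned int numNodes      = 200;
  const unsigned int numCtrlPoints = 1000;

  // W is assembled once, e.g. from the weight function above; this single
  // made-up entry stands in for the real assembly loop.
  vnl_sparse_matrix<double> W( numNodes, numCtrlPoints );
  W.put( 0, 10, 0.25 );

  // The control-point coefficients change at every iteration; W does not, so
  // interpolating the node displacements is just a sparse mat-vec product.
  vnl_vector<double> coefficients( numCtrlPoints, 0.0 );
  vnl_vector<double> nodeValues( numNodes, 0.0 );
  W.mult( coefficients, nodeValues );

  return 0;
}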
Nice! :)
This is the implementation of the 3rd order B-Spline kernel in ITK:
inline double Evaluate( const Dispatch<3> &, const double & u ) const
{
  double absValue = vnl_math_abs( u );
  double sqrValue = vnl_math_sqr( u );
  if ( absValue < 1.0 ) {
    // central piece of the cubic B-spline, |u| < 1
    return ( 4.0 - 6.0 * sqrValue + 3.0 * sqrValue * absValue ) / 6.0;
  } else if ( absValue < 2.0 ) {
    // outer piece, 1 <= |u| < 2
    return ( 8.0 - 12.0 * absValue + 6.0 * sqrValue - sqrValue * absValue ) / 6.0;
  } else {
    // outside the support [-2, 2]
    return 0.0;
  }
}
And this is my (working) simple implementation (IDW, inverse distance weighting, 2nd order):
// weight of a node at radius u from the kernel centre
wi = ( u < vnl_math::eps ) ? 1.0 : ( 1.0 / vcl_pow( u, 2 ) );
So, to make them compatible, everything revolves around u, the radius with respect to the center of the kernel. I propose to normalize this value depending on the density of the sampling grid. This way we could control the support region of the kernel in terms of voxels instead of physical units. What do you think, @zosso?
Until we reach a conclusion on this, I'll implement it without normalization (that is, very small support regions). It will be easy to add the normalizer afterwards.
Note: if we needed B-splines of order higher than 3, we would have to implement them ourselves (they are not available). I do not think it would be hard, just another task.
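For illustration, a possible sketch of that normalization, using the 3rd-order itk::BSplineKernelFunction as the radial weight discussed above (the helper name and the numeric values are made up):

#include "itkBSplineKernelFunction.h"

// Hypothetical helper (not in the codebase): weight of a node at physical
// distance 'r' from the kernel centre. Dividing by the control-grid spacing
// expresses the distance in grid cells, so the cubic kernel's support always
// covers [-2, 2] grid cells regardless of the physical spacing.
double ComputeNormalizedWeight( double r, double gridSpacing )
{
  typedef itk::BSplineKernelFunction<3> KernelType;  // 3rd-order kernel
  KernelType::Pointer kernel = KernelType::New();

  const double u = r / gridSpacing;  // radius in grid units
  return kernel->Evaluate( u );
}

int main()
{
  // Example: a node 6 mm away from the kernel centre, with 5 mm grid spacing.
  const double w = ComputeNormalizedWeight( 6.0, 5.0 );
  (void)w;
  return 0;
}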
Short question for @zosso:
When I want to deform a high-resolution image (e.g. 128×128×48), should I:
I guess that if we used 1, we would need to recompute the weights etc., and since we wanted our kernel width to depend on the density of the grid, we would implicitly be creating a more local transform. So the solution should be 2, or something similar.
Thanks!
For warping, the same dense deformation field should be used as the one optimized; this means that the same interpolating basis functions are to be employed. Option 1 it is. Hence the interest of not using a radial basis function, but the fast, separable b-spline kernel already implemented in ITK. Either way, the weights can be precomputed for whatever basis function in exactly the same way as for the sparse W matrix for the contour points.
> For warping, the same dense deformation field should be used as the one optimized; this means that the same interpolating basis functions are to be employed.
Then it is option 2, given that by 1 I meant that we recompute the weights, with the RBF kernel adapted to the new sampling grid (probably the word "again" tangled the meaning). So the basis functions change.
> Hence the interest of not using a radial basis function, but the fast, separable b-spline kernel already implemented in ITK.
Yes, that is clear to me. But I will move forward on the remaining tasks (if you agree), given that the current implementation works. BTW, I recently stumbled upon this: http://www.itk.org/Doxygen/html/classitk_1_1ElasticBodySplineKernelTransform.html and the corresponding paper (http://dx.doi.org/10.1109/2945.620490), which might be of interest.
> Either way, the weights can be precomputed for whatever basis function in exactly the same way as for the sparse W matrix for the contour points.
OK, this part I get :)
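To make that last point concrete, here is a rough sketch of evaluating the dense field at one voxel from the control-point coefficients, using the same itk::BSplineInterpolationWeightFunction weights (ITK 4.x assumed; the grid size, coordinates and coefficient values are all hypothetical):

#include "itkBSplineInterpolationWeightFunction.h"
#include "itkContinuousIndex.h"
#include "itkImage.h"
#include "itkImageRegionConstIterator.h"
#include "itkVector.h"

int main()
{
  const unsigned int Dimension   = 3;
  const unsigned int SplineOrder = 3;

  typedef itk::Vector<double, Dimension>          DisplacementType;
  typedef itk::Image<DisplacementType, Dimension> CoefficientImageType;
  typedef itk::BSplineInterpolationWeightFunction<double, Dimension,
                                                  SplineOrder> WeightsFunctionType;

  // Control-point coefficient image holding the optimized parameters u_k
  // (hypothetical 12x12x12 grid, all zero here).
  CoefficientImageType::Pointer coefficients = CoefficientImageType::New();
  CoefficientImageType::SizeType gridSize;
  gridSize.Fill( 12 );
  coefficients->SetRegions( gridSize );
  coefficients->Allocate();
  DisplacementType zero;
  zero.Fill( 0.0 );
  coefficients->FillBuffer( zero );

  WeightsFunctionType::Pointer weightsFunction = WeightsFunctionType::New();

  // One voxel of the high-resolution image, expressed as a continuous index
  // in the control-point grid (hypothetical value).
  itk::ContinuousIndex<double, Dimension> cindex;
  cindex[0] = 4.3; cindex[1] = 6.7; cindex[2] = 2.1;

  // ITK 4.x: WeightsType is an itk::Array that must be sized beforehand.
  WeightsFunctionType::WeightsType weights( weightsFunction->GetNumberOfWeights() );
  WeightsFunctionType::IndexType   startIndex;
  weightsFunction->Evaluate( cindex, weights, startIndex );

  // Displacement at the voxel = sum over the support region of the B-spline
  // weights times the surrounding control-point coefficients (i.e. the same
  // basis functions as for the contour nodes).
  CoefficientImageType::RegionType supportRegion;
  supportRegion.SetIndex( startIndex );
  supportRegion.SetSize( weightsFunction->GetSupportSize() );

  DisplacementType displacement;
  displacement.Fill( 0.0 );
  itk::ImageRegionConstIterator<CoefficientImageType> it( coefficients, supportRegion );
  unsigned int k = 0;
  for ( it.GoToBegin(); !it.IsAtEnd(); ++it, ++k )
  {
    displacement += it.Get() * weights[k];
  }

  return 0;
}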
Re-reading my last suggestion (EBS), I see that it does not solve our problem.
I will keep using the current implementation, but I will open a new issue asking for a new implementation based on what we have discussed here. I will schedule it for some future point, by which we may have found the limitations of the current (poor) implementation.
Opened #79
Implement TPS for the RBF? or b-splines? or what?