oesteban / RegSeg

RegSeg is a simultaneous segmentation and registration method that uses active contours without edges (ACWE) extracted from structural images. The contours evolve through a free-form deformation field supported by the B-spline basis to optimally map the contours onto the data in the target space.
MIT License

Energy Measurement #49

Closed: oesteban closed this issue 11 years ago

oesteban commented 11 years ago

Implement the GetValue() method

oesteban commented 11 years ago

I have some doubts about this issue:

To start with, I have the feeling that the natural way to measure the energy is to apply the energy integral discretized over the grid of the target features. This means that we do not need to resample, which is a nice property.

But there is a huge problem at the boundaries. In the regions close to the contour, we need to somehow measure the tissue volume included in each limiting ROI, and this is not trivial because we need to intersect the voxel with the surface. Even in the simplest case (only one face), this is not easy. If we don't take this into account, I'm afraid we would lose the necessary sensitivity.

A second option would be to create a very high resolution grid with the same physical extent as the target images (FA, MD). Then we would binarize the inside of the contours on that grid. Finally, downsampling with interpolation would give us a fuzzy version of the mask on the target grid (FA, MD). This way we would obtain a measurement of the partial volume (PV). IMHO, this is very related to Marching Cubes and similar algorithms, but I'm completely ignorant of those.
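For illustration, a minimal ITK sketch of this second option (hypothetical names; `hiresMask` would be the contour interior binarized on the high-resolution grid, stored as float 0/1, and `target` the FA/MD image; strictly, smoothing before resampling would approximate the per-voxel fraction better):

```cpp
#include "itkIdentityTransform.h"
#include "itkImage.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkResampleImageFilter.h"

using ImageType = itk::Image<float, 3>;

// Downsample a high-resolution binary mask (0/1, stored as float) onto the
// target FA/MD grid; the interpolated values in [0, 1] act as a fuzzy,
// partial-volume version of the mask.
ImageType::Pointer FuzzyMask(ImageType::Pointer hiresMask, ImageType::Pointer target)
{
  using Resampler = itk::ResampleImageFilter<ImageType, ImageType>;
  auto resample = Resampler::New();
  resample->SetInput(hiresMask);
  resample->SetTransform(itk::IdentityTransform<double, 3>::New());
  resample->SetInterpolator(
    itk::LinearInterpolateImageFunction<ImageType, double>::New());
  resample->UseReferenceImageOn();      // copy size/spacing/origin/direction...
  resample->SetReferenceImage(target);  // ...from the target FA/MD image
  resample->Update();
  return resample->GetOutput();
}
```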

Probably there are many more ways to work it out, e.g., directly upsampling the target features (FA, MD) with a good interpolator and computing the energy there with binary masks.

Please let me know (@zosso, @meribach) what your opinion is. Maybe there is a standard way to do this that I don't know about.

Additionally, I recall that this is also important for the demonstration of duality with the probabilistic approach, given that here we face the continuous vs. discretized formulations.

zosso commented 11 years ago

This is an issue indeed, but careful: the evaluation of the variational energy involves volumetric domain integrals, not surface integrals; this is eq. (5) in the submitted IPMI paper.

Discretizing in an oversampled DW space would be ideal; however, the sampling changes at every iteration, so this is very painful. In contrast, I'd suggest sampling the regions in T1 space, at the intrinsic resolution of the T1 images. Then the "corresponding" sampling in DW is achieved by warping. The warped sampling points then interpolate the features in DW space, so that (5) can be computed. There are most likely ITK filters available for most of these things. Worst case, you define the (volumetric) samples in T1 as point data, transform them by u, interpolate the features in DW, compute squared distances, and sum over all points.

The reasoning is the following: your contours are not better defined than the resolution of the T1 images, so there is no need to go higher in resolution for the quadrature of (5). You might want to consider including the determinant of the Jacobian of u here, since you are effectively performing a change of variables. Indeed, you should work this out theoretically by looking at the change of variables x → z, with x = z + u(z), in the domain integrals, where x is the space variable in DW and z is the space variable in T1.
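Spelled out for a generic region-based energy (a sketch; the exact form of eq. (5) may differ in its details), the change of variables reads:

$$
E_i = \int_{\Omega_i^{x}} \left\| f(x) - \mu_i \right\|^2 dx
    = \int_{\Omega_i^{z}} \left\| f(z + u(z)) - \mu_i \right\|^2 \left| \det\!\left( I + \nabla u(z) \right) \right| dz,
$$

where $f$ denotes the features in DW space, $\mu_i$ the descriptor of region $i$, $\Omega_i^{x}$ the region in DW space, and $\Omega_i^{z}$ its preimage in T1 space.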

Oh, and please do write down ALL your theoretical reasoning, results, etc. in a LaTeX paper, so that we can all keep track of what exactly you're implementing....

oesteban commented 11 years ago

> This is an issue indeed, but careful: the evaluation of the variational energy involves volumetric domain integrals, not surface integrals; this is eq. (5) in the submitted IPMI paper.

Yes, I understood that from the beginning ;).

> Discretizing in an oversampled DW space would be ideal; however, the sampling changes at every iteration, so this is very painful. In contrast, I'd suggest sampling the regions in T1 space, at the intrinsic resolution of the T1 images. Then the "corresponding" sampling in DW is achieved by warping. The warped sampling points then interpolate the features in DW space, so that (5) can be computed. There are most likely ITK filters available for most of these things. Worst case, you define the (volumetric) samples in T1 as point data, transform them by u, interpolate the features in DW, compute squared distances, and sum over all points. The reasoning is the following: your contours are not better defined than the resolution of the T1 images, so there is no need to go higher in resolution for the quadrature of (5). You might want to consider including the determinant of the Jacobian of u here, since you are effectively performing a change of variables. Indeed, you should work this out theoretically by looking at the change of variables x → z, with x = z + u(z), in the domain integrals, where x is the space variable in DW and z is the space variable in T1.

The approach sounds good to me, and this way we also start facing the modulation problem. We can start by applying the Jacobian to the features, but I was thinking of improving this when unwarping the original signal. We could take some ideas from the Angular Interpolation (AI) approach in the paper on dMRI-to-dMRI registration that I recommended to you. I will open this discussion soon.

> Oh, and please do write down ALL your theoretical reasoning, results, etc. in a LaTeX paper, so that we can all keep track of what exactly you're implementing....

No problem :).

zosso commented 11 years ago

OK, sorry for my misunderstanding.

oesteban commented 11 years ago

I have not committed the code yet, though the basics of it are now working...

But the time it takes to check whether one point is inside the surface is prohibitive. Even if I downsample the FA/MD space (yes, it is crazy), there are still too many samples to test.

The only solution that would be fast is to have binary masks (at high resolution) of the inside of the surfaces. My first approach was to generate the mask from the surface and then cache it, but that also takes a lot of time (with the advantage that you compute it only once).

If we want to make it fast, we need to start from a binary mask of the ROIs. This is not very good unless we use resolutions much higher than that of the surface.

The other approach would be to provide our registration tool with binary masks in T1 space rather than surfaces. The first step would then be to apply Marching Cubes to get the surfaces. But I cannot guess the implications of this change: it is easy to implement, but we would rely on the algorithms shipped with ITK, and I don't know how long that takes either.

What do you think?

EDIT: before trying something different, I will try to define the bounding box of the surfaces and compute the energy only inside it. Let's see if it improves anything over the brute force of checking all voxels.
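For reference, a minimal sketch of that bounding-box restriction (assuming an axis-aligned image direction; names are hypothetical):

```cpp
#include "itkImage.h"
#include "itkMesh.h"

using MeshType  = itk::Mesh<float, 3>;
using ImageType = itk::Image<float, 3>;

// Convert the physical bounding box of a surface into an image region of the
// sampling grid, so the energy quadrature only visits voxels near the surface.
ImageType::RegionType BoundingRegion(MeshType::Pointer surf, ImageType::Pointer grid)
{
  // xmin, xmax, ymin, ymax, zmin, zmax in physical coordinates
  const auto bounds = surf->GetBoundingBox()->GetBounds();

  ImageType::PointType lo, hi;
  for (unsigned int d = 0; d < 3; ++d)
  {
    lo[d] = bounds[2 * d];
    hi[d] = bounds[2 * d + 1];
  }

  ImageType::IndexType loIdx, hiIdx;
  grid->TransformPhysicalPointToIndex(lo, loIdx);  // assumes axis-aligned direction
  grid->TransformPhysicalPointToIndex(hi, hiIdx);

  ImageType::RegionType region;
  region.SetIndex(loIdx);
  for (unsigned int d = 0; d < 3; ++d)
  {
    region.SetSize(d, static_cast<ImageType::SizeValueType>(hiIdx[d] - loIdx[d] + 1));
  }
  region.Crop(grid->GetLargestPossibleRegion());   // clip to the image extent
  return region;
}
```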

zosso commented 11 years ago

I'm not sure whether I understand correctly.

I feel this somehow relates to what I posted before:

> I'd suggest sampling the regions in T1 space, at the intrinsic resolution of the T1 images. Then the "corresponding" sampling in DW is achieved by warping. The warped sampling points then interpolate the features in DW space, so that (5) can be computed. There are most likely ITK filters available for most of these things. Worst case, you define the (volumetric) samples in T1 as point data, transform them by u, interpolate the features in DW, compute squared distances, and sum over all points.

No?

Why do you have to check whether a point is inside a surface (volume?)? Can't you just sample the T1 space (well, yes, here you have to sample the volumes inside the surfaces, but you do this only once, so it shouldn't be prohibitive...), put those points in an appropriate ITK structure (such as http://www.itk.org/Doxygen/html/classitk_1_1PointSet.html), warp them using u, and interpolate the DW data at the deformed point set? (That is what the ITK registration algorithm would do at every iteration....)
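For illustration, a minimal sketch along those lines (hypothetical names throughout; `displacement` stands in for evaluating u at a point, and `mean` for the current region descriptor; this is not code from the repository):

```cpp
#include "itkImage.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkPointSet.h"

using ImageType    = itk::Image<float, 3>;
using PointSetType = itk::PointSet<float, 3>;

// Hypothetical helper: evaluates the displacement u at a T1-space point z.
itk::Vector<float, 3> displacement(const PointSetType::PointType & z);

// Sum of squared distances between the warped samples of one region and the
// region descriptor 'mean', i.e. a discretization of the domain integral (5).
double RegionEnergy(const PointSetType * samples, const ImageType * feature, double mean)
{
  auto interp = itk::LinearInterpolateImageFunction<ImageType, double>::New();
  interp->SetInputImage(feature);

  double energy = 0.0;
  for (auto it = samples->GetPoints()->Begin(); it != samples->GetPoints()->End(); ++it)
  {
    const PointSetType::PointType z = it.Value();
    const PointSetType::PointType x = z + displacement(z);  // warp: x = z + u(z)

    itk::Point<double, 3> xd;
    xd.CastFrom(x);
    if (interp->IsInsideBuffer(xd))
    {
      const double d = interp->Evaluate(xd) - mean;  // squared distance to the mean
      energy += d * d;                               // quadrature: sum over samples
    }
  }
  return energy;
}
```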

I'm sorry if I'm missing the point....

zosso commented 11 years ago

I just realized: you need to do the whole sampling exercise anyway, in order to properly re-estimate the region statistics...

oesteban commented 11 years ago

You understood perfectly. So did I :D.

In any case, the high-resolution space (higher than T1, the same as T1, between dMRI and T1... it doesn't matter anyway) needs to be sampled. And it takes a long time to check whether every sample is inside the region (maybe because I'm working in debug mode, or because ITK is not very efficient at this).

This is what I do:

  1. Create a sampling grid. I realized that you may want to make it exactly like your T1, or maybe not; it does not matter if you are not using binary masks taken from FreeSurfer. Obviously, if you have extra information on some grid, you probably want to keep that sampling (that is, the T1 grid), but currently we only have the dMRI grid and the PointSet. I started with a very high density (~5 times that of the dMRI), but I realized it took too much time. Then I reduced it gradually, until I ended up at 0.5 times the density of the dMRI (which is stupid, just to check that it worked).
  2. Sample. For each point in the sampling grid, I compute the new location after warping. Then I take the interpolated value (which should be modulated by the Jacobian; not yet implemented) and compute the energy.

So, what I'm checking out now is reducing the sampling grid to the actual bounding box of each surface. With WM it could ease the problem a bit... but it is still a lot.

EDIT: for example, the region containing the WM surface takes 1,122,160 samples in my new sampling grid, while the whole image extent takes 2,585,450 voxels. Given that I'm not working at a very high resolution (still lower than the dMRI, for testing purposes), this will ease the problem a lot, but it is still not enough.

EDIT2: on the other hand, this approach is not efficient when the bounding boxes of very large regions overlap heavily.

Let's see.

zosso commented 11 years ago

OK, a few comments/suggestions:

You might want to try:

oesteban commented 11 years ago

I was just writing about the Jacobian, exactly because of that detail:

  1. I am detaching this thread from the unwarping module (they are different Jacobians :D)
  2. OK, I will check your suggestion (even though it is close to what I was doing... let's give it a try!)

;) I'll let you know ASAP

oesteban commented 11 years ago

OK, the itk::TriangleMeshToBinaryImageFilter is the best option, AFAIK. See 586f3e0.
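For reference, a minimal sketch of how the filter can be driven (the grid parameters here match the 0.5 mm sampling grid mentioned below; `mesh` is hypothetical; origin and direction are left at their defaults for brevity):

```cpp
#include "itkImage.h"
#include "itkMesh.h"
#include "itkTriangleMeshToBinaryImageFilter.h"

using MeshType = itk::Mesh<float, 3>;
using MaskType = itk::Image<unsigned char, 3>;

// Rasterize the interior of a triangular surface into a binary mask on an
// explicit grid (here, a 0.5 mm isotropic grid).
MaskType::Pointer BinarizeSurface(MeshType::Pointer mesh)
{
  using FilterType = itk::TriangleMeshToBinaryImageFilter<MeshType, MaskType>;
  auto filter = FilterType::New();
  filter->SetInput(mesh);

  MaskType::SizeType size = {{508, 508, 252}};
  filter->SetSize(size);

  double spacing[3] = {0.5, 0.5, 0.5};
  filter->SetSpacing(spacing);

  filter->SetInsideValue(1);   // voxels inside the surface
  filter->SetOutsideValue(0);  // everything else
  filter->Update();
  return filter->GetOutput();
}
```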

oesteban commented 11 years ago

Orientation looks much better now. This is after the itk::TriangleMeshToBinaryImageFilter:

[screenshot: binarized surface result]

Sampling grid size: 508×508×252 voxels. Sampling grid resolution: 0.5 mm isotropic.

oesteban commented 11 years ago

For the Jacobian, I compute itk::DisplacementFieldJacobianDeterminantFilter (http://www.itk.org/Doxygen/html/classitk_1_1DisplacementFieldJacobianDeterminantFilter.html) on the deformation field at the high T1 resolution. It takes a bit longer, but I think it is acceptable.
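A minimal sketch of that step (assuming the field is stored as an itk::Image of itk::Vector; `field` is hypothetical):

```cpp
#include "itkDisplacementFieldJacobianDeterminantFilter.h"
#include "itkImage.h"
#include "itkVector.h"

using VectorType = itk::Vector<float, 3>;
using FieldType  = itk::Image<VectorType, 3>;
using JacType    = itk::Image<float, 3>;

// Per-voxel Jacobian determinant of the mapping z -> z + u(z), computed on the
// grid where the displacement field u is sampled (here, the high-res T1 grid).
JacType::Pointer JacobianDeterminant(FieldType::Pointer field)
{
  using JacFilter =
    itk::DisplacementFieldJacobianDeterminantFilter<FieldType, float, JacType>;
  auto jac = JacFilter::New();
  jac->SetInput(field);
  jac->SetUseImageSpacing(true);  // use physical spacing for the derivatives
  jac->Update();
  return jac->GetOutput();
}
```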

I am currently checking that the whole thing is now working. After that, I'll be able to close this issue.