Mouse-Imaging-Centre / RMINC

Statistics for MINC volumes: a library that integrates voxel-based statistics for MINC volumes into the R environment. It supports reading and writing MINC volumes, running voxel-wise linear models and correlations, correcting for multiple comparisons using the False Discovery Rate, and more. With contributions from Jason Lerch, Chris Hammill, Jim Nikelski and Matthijs van Eede. Additional information can be found here:
https://mouse-imaging-centre.github.io/RMINC

smoothing on cortex #277

Closed DJFernandes closed 4 years ago

DJFernandes commented 4 years ago

Added functions that allow for smoothing on manifolds like the cortex. Is this something RMINC should have?

gdevenyi commented 4 years ago

Amazing!

gdevenyi commented 4 years ago

Is this a port of http://brainimaging.waisman.wisc.edu/~chung/lb/ or a different implementation?

cfhammill commented 4 years ago

Thanks Darren! Ditto to Gabe's question. I implemented the Laplace–Beltrami operator at one point, but was forced to use R's interface to Octave to do it (I needed a generalized SVD).

Can I propose that you drop the man page edits from the commit? Also, can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use, if it's available.

DJFernandes commented 4 years ago

> Is this a port of http://brainimaging.waisman.wisc.edu/~chung/lb/ or a different implementation?

Same idea. I got the equations from Wikipedia: https://en.wikipedia.org/wiki/Discrete_Laplace_operator

I didn't use eigenfunctions, though, because I found the memory needed for sufficient accuracy was not a worthwhile tradeoff. Instead, the solution is numerical with some finite time step. It should perform well with the meshes and blurring we typically use in cortical thickness analysis (i.e. ~40000 sparsely connected vertices, 0.2 mm FWHM blurring kernels).
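For anyone reading along, the finite-time-step approach can be sketched in a few lines. This is an illustrative Python sketch rather than the PR's actual R code: it uses an unweighted graph Laplacian as a crude stand-in for the cotangent Laplace–Beltrami weights on a real cortical mesh, and the function names are hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, identity

def graph_laplacian(n, edges):
    """Unweighted graph Laplacian L = D - A; a crude stand-in for the
    cotangent Laplace-Beltrami operator on a triangulated surface."""
    rows, cols = zip(*edges)
    a = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
    a = a + a.T                      # symmetrize (each edge listed once)
    return diags(np.asarray(a.sum(axis=1)).ravel()) - a

def smooth(field, lap, t_total, dt):
    """Explicit-Euler heat diffusion: f <- (I - dt*L) f, repeated
    t_total/dt times. Only stable when dt is small relative to the
    reciprocal of L's largest eigenvalue."""
    op = identity(lap.shape[0], format="csr") - dt * lap
    f = np.asarray(field, dtype=float)
    for _ in range(int(round(t_total / dt))):
        f = op @ f
    return f
```

Because the symmetric Laplacian's rows and columns sum to zero, each step conserves the total "heat" while flattening the field toward its mean.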

gdevenyi commented 4 years ago

> we typically use in cortical thickness analysis (i.e. ~40000 sparsely-connected vertices, 0.2mm FWHM blurring kernels)

CIVET blurs at 10-30mm FWHM default and power analysis indicates this is a good scale: https://www.bic.mni.mcgill.ca/~jason/reprints/Lerch_popsim.pdf

Does this code work okay for these scales?

gdevenyi commented 4 years ago

> numerical with some finite time-step.

I believe that is a similar implementation to CIVET's depth_potential then: https://github.com/aces/mni_surfreg/blob/master/tools/depth_potential.c

DJFernandes commented 4 years ago

> Thanks Darren! Ditto to Gabe's question. I at one point implemented the laplace beltrami operator but was forced to use R's interface to octave to do it (needed generalized SVD).
>
> Can I propose that you drop the man page edits from the commit? Also can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use if it's available

I removed the man pages and made the functions internal. Here is a gif of the smoothing in action on some artificial data: (gif: heat_sim_scaled). I produced an rmd example here: /micehome/dfernandes/Documents/test_cortex_smoothing

DJFernandes commented 4 years ago

> > we typically use in cortical thickness analysis (i.e. ~40000 sparsely-connected vertices, 0.2mm FWHM blurring kernels)
>
> CIVET blurs at 10-30mm FWHM default and power analysis indicates this is a good scale: https://www.bic.mni.mcgill.ca/~jason/reprints/Lerch_popsim.pdf
>
> Does this code work okay for these scales?

It should. Human cortical surface area is ~2000 times larger than in mice, so an equivalent blurring kernel is ~0.2 mm × sqrt(2000) ≈ 10 mm. How many vertices do you typically use?
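The scaling arithmetic can be checked directly (the area ratio and kernel size are the approximate figures quoted above; smoothing extent scales with linear size, i.e. the square root of the area ratio):

```python
import math

mouse_fwhm_mm = 0.2   # typical mouse cortical blurring kernel (from above)
area_ratio = 2000     # approximate human/mouse cortical surface area ratio

# FWHM scales with linear size, i.e. sqrt of the area ratio
human_equiv_fwhm = mouse_fwhm_mm * math.sqrt(area_ratio)
print(round(human_equiv_fwhm, 1))  # ~8.9 mm, i.e. roughly the 10 mm quoted
```

This lands at the low end of the 10–30 mm range CIVET uses by default.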

DJFernandes commented 4 years ago

> > numerical with some finite time-step.
>
> I believe that is a similar implementation to CIVET's depth_potential then: https://github.com/aces/mni_surfreg/blob/master/tools/depth_potential.c

Yeah, this looks similar.

gdevenyi commented 4 years ago

CIVET's surfaces are 40962 vertices for low res and 163842 for high-res.

cfhammill commented 4 years ago

Very cool, thanks Darren. Can you think of a way to automate testing the behaviour? Is there a gold standard we can compare against? If not, we can just take the current behaviour as a reference.

DJFernandes commented 4 years ago

> CIVET's surfaces are 40962 vertices for low res and 163842 for high-res.

The low res should be fine. I have never tested the high res, but back-of-the-envelope calculations suggest there shouldn't be any memory issues (I calculated that the operator's sparse matrix should be ~20 MB in the high-res case).
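One version of that back-of-the-envelope estimate, assuming ~7 nonzeros per row (6 neighbours plus the diagonal, typical for a triangulated sphere-like mesh) and CSR storage with 8-byte values and 4-byte indices (these storage assumptions are illustrative, not what RMINC necessarily uses):

```python
def csr_megabytes(n_vertices, nnz_per_row=7, value_bytes=8, index_bytes=4):
    """Approximate CSR memory for a sparse mesh Laplacian."""
    nnz = n_vertices * nnz_per_row
    data = nnz * value_bytes                 # nonzero values
    indices = nnz * index_bytes              # column indices
    indptr = (n_vertices + 1) * index_bytes  # row pointers
    return (data + indices + indptr) / 1e6

print(round(csr_megabytes(163842), 1))  # high-res CIVET surface: ~14.4 MB
print(round(csr_megabytes(40962), 1))   # low-res CIVET surface: ~3.6 MB
```

This puts the high-res operator in the 15–20 MB ballpark, consistent with the estimate above.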

DJFernandes commented 4 years ago

> Very cool, thanks Darren. Can you think of a way to automate testing the behaviour, like is there a gold standard we can compare against, if not we can just take current behaviour as a reference.

I am sure someone has solved its eigenfunctions for a sphere, and I think I can at least do it numerically with a sufficient degree of confidence. That could be a good gold standard.
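The sphere's eigenfunctions are indeed known in closed form: they are the spherical harmonics Y_lm, with Laplace–Beltrami eigenvalue l(l+1) on the unit sphere, so heat-kernel smoothing attenuates each degree-l harmonic by exp(-l(l+1)t). A small sketch of the analytic decay factors a numerical smoother could be benchmarked against (the FWHM-to-diffusion-time conversion assumes the usual flat-space Gaussian approximation; function names are illustrative):

```python
import math

def heat_decay(l, t):
    """Analytic attenuation of a degree-l spherical harmonic after
    diffusion time t on the unit sphere (eigenvalue l*(l+1))."""
    return math.exp(-l * (l + 1) * t)

def fwhm_to_time(fwhm):
    """Diffusion time for a Gaussian kernel of a given FWHM, using
    sigma^2 = 2t and FWHM = 2*sqrt(2*ln 2)*sigma, so t = FWHM^2/(16 ln 2)."""
    return fwhm ** 2 / (16 * math.log(2))

# The constant (l = 0) component is untouched; higher degrees are damped.
print(heat_decay(0, fwhm_to_time(0.5)))  # 1.0
```

A numerical smoother applied to a pure Y_lm sampled on a sphere mesh should reproduce these factors to within discretization error.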

gdevenyi commented 4 years ago

I believe this is technically an exact solution for a large enough number of eigenvalues: http://brainimaging.waisman.wisc.edu/~chung/lb/

DJFernandes commented 4 years ago

> I believe this is technically a exact solution for a large enough number of eigenvalues: http://brainimaging.waisman.wisc.edu/~chung/lb/

Agreed. It would be a good benchmark.

DJFernandes commented 4 years ago

> Thanks Darren! Ditto to Gabe's question. I at one point implemented the laplace beltrami operator but was forced to use R's interface to octave to do it (needed generalized SVD).
>
> Can I propose that you drop the man page edits from the commit? Also can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use if it's available

@cfhammill @gdevenyi I randomly picked eigenvectors and created an initial scalar field for testing. Since I know which eigenvectors I picked, I can analytically determine the time evolution of the field and compare it to the numerical results. Here is an example:

Here is the unblurred field: (image)

Here is the numerical blurring: (image)

Here is the analytical solution: (image)

Here is a comparison of the numerical and analytical values at each vertex: (image)

Clearly the analytical image is sharper, but the numerical approximation is OK for my purposes. Please feel free to play around with the RMarkdown document located here: /micehome/dfernandes/Documents/test_cortex_smoothing/benchmark_19mar2020.Rmd
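The testing strategy above rests on a standard fact: if the initial field is an eigenvector v of the Laplacian with eigenvalue λ, heat flow simply scales it by exp(-λt). A minimal Python illustration on a 4-cycle graph (not the RMarkdown benchmark itself, which isn't reproduced here):

```python
import numpy as np

# Dense Laplacian of a 4-cycle graph; small enough to eigendecompose exactly.
lap = np.array([[ 2, -1,  0, -1],
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)

evals, evecs = np.linalg.eigh(lap)
v = evecs[:, 1]          # a chosen eigenvector, eigenvalue evals[1]
t, dt = 0.5, 1e-4

# Numerical: explicit-Euler heat flow, f <- f - dt * L f
f = v.copy()
for _ in range(int(t / dt)):
    f = f - dt * (lap @ f)

# Analytic: the eigenvector simply decays by exp(-lambda * t)
analytic = np.exp(-evals[1] * t) * v
print(np.max(np.abs(f - analytic)) < 1e-3)  # True
```

A field built from a known combination of eigenvectors evolves as the same combination with each coefficient multiplied by its exp(-λt) factor, which is exactly what the benchmark compares against.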

cfhammill commented 4 years ago

Amazing work, thanks @DJFernandes. I created a new PR (#278) of your master branch onto RMINC develop. From there I can start adding automatic tests to our test suites. Closing this one, and merging the other.