Closed DJFernandes closed 4 years ago
Amazing!
Is this a port of http://brainimaging.waisman.wisc.edu/~chung/lb/ or a different implementation?
Thanks Darren! Ditto to Gabe's question. I at one point implemented the Laplace-Beltrami operator but was forced to use R's interface to Octave to do it (I needed a generalized SVD).
Can I propose that you drop the man page edits from the commit? Also, can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use, if it's available.
> Is this a port of http://brainimaging.waisman.wisc.edu/~chung/lb/ or a different implementation?
Same idea. I got the equations from Wikipedia: https://en.wikipedia.org/wiki/Discrete_Laplace_operator
I didn't use eigenfunctions, though, because I found the memory needed for sufficient accuracy was not a worthwhile tradeoff. Instead, the solution is numerical, with some finite time step. It should perform well with the meshes and blurring we typically use in cortical thickness analysis (i.e. ~40000 sparsely connected vertices and 0.2mm FWHM blurring kernels).
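The finite time-step approach can be sketched as follows. This is a toy Python illustration on a cycle graph, not the actual RMINC code; the graph, step size, and step count are all invented for the example:

```python
import numpy as np

# Toy sketch: heat-equation smoothing via the discrete (graph) Laplacian,
# integrated with explicit finite time steps. A small cycle graph stands in
# for the sparsely connected mesh vertices.
n = 40
A = np.zeros((n, n))
for i in range(n):                  # each vertex joined to its two neighbours
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A      # combinatorial graph Laplacian

x = np.zeros(n)
x[0] = 1.0                          # an impulse to be diffused

dt = 0.1                            # finite time step (must respect stability)
for _ in range(50):
    x = x - dt * (L @ x)            # forward-Euler step of dx/dt = -L x

# total "heat" is conserved while the impulse spreads out
print(round(x.sum(), 6), x.max() < 1.0)   # prints: 1.0 True
```

Because the Laplacian's rows sum to zero, each step conserves the total signal, which is the behaviour a smoothing kernel should have.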
> we typically use in cortical thickness analysis (i.e. ~40000 sparsely-connected vertices, 0.2mm FWHM blurring kernels)
CIVET blurs at 10-30mm FWHM default and power analysis indicates this is a good scale: https://www.bic.mni.mcgill.ca/~jason/reprints/Lerch_popsim.pdf
Does this code work okay for these scales?
> numerical with some finite time-step.
I believe that is a similar implementation to CIVET's depth_potential then: https://github.com/aces/mni_surfreg/blob/master/tools/depth_potential.c
> Thanks Darren! Ditto to Gabe's question. I at one point implemented the laplace beltrami operator but was forced to use R's interface to octave to do it (needed generalized SVD).
> Can I propose that you drop the man page edits from the commit? Also can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use if it's available
I removed the man pages and made the functions internal. Here is a gif of the smoothing in action on some artificial data. I produced an Rmd example here: /micehome/dfernandes/Documents/test_cortex_smoothing
> we typically use in cortical thickness analysis (i.e. ~40000 sparsely-connected vertices, 0.2mm FWHM blurring kernels)
> CIVET blurs at 10-30mm FWHM default and power analysis indicates this is a good scale: https://www.bic.mni.mcgill.ca/~jason/reprints/Lerch_popsim.pdf
> Does this code work okay for these scales?
It should. Human cortical surface area is ~2000 times larger than the mouse's, so an equivalent blurring kernel is ~0.2mm × sqrt(2000) ≈ 10mm. How many vertices do you typically use?
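The scaling argument is just arithmetic (the ~2000× area ratio is the figure assumed in the comment): FWHM is a length, so it scales with the square root of the area ratio.

```python
import math

# Back-of-the-envelope kernel scaling: mouse -> human.
# Assumed figure: human cortex has ~2000x the surface area of mouse cortex.
area_ratio = 2000
mouse_fwhm_mm = 0.2

# lengths scale as sqrt(area), so the equivalent FWHM is:
human_fwhm_mm = mouse_fwhm_mm * math.sqrt(area_ratio)
print(round(human_fwhm_mm, 1))   # -> 8.9, i.e. roughly the 10-30mm CIVET range
```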
> numerical with some finite time-step.
> I believe that is a similar implementation to CIVET's depth_potential then: https://github.com/aces/mni_surfreg/blob/master/tools/depth_potential.c
Yea, this looks similar
CIVET's surfaces are 40962 vertices for low res and 163842 for high-res.
Very cool, thanks Darren. Can you think of a way to automate testing the behaviour? For example, is there a gold standard we can compare against? If not, we can just take the current behaviour as a reference.
> CIVET's surfaces are 40962 vertices for low res and 163842 for high-res.
The low-res should be fine. I have never tested the high-res, but back-of-the-envelope calculations suggest there shouldn't be any memory issues (I calculated that the operator's sparse matrix should be ~20MB in the high-res case).
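A rough version of that back-of-the-envelope estimate is below. The CSR storage layout and the ~7 nonzeros per vertex (diagonal plus ~6 neighbours, a typical valence for a triangulated sphere mesh) are assumptions for illustration, not measurements from the actual code:

```python
# Memory estimate for the Laplacian operator stored as a CSR sparse matrix
# (float64 values, 32-bit column indices and row pointers -- all assumed).
n_vertices = 163842               # CIVET high-res surface
nnz = 7 * n_vertices              # ~7 nonzeros per row: diagonal + 6 neighbours

bytes_total = (
    nnz * 8                       # data array (float64)
    + nnz * 4                     # column indices (int32)
    + (n_vertices + 1) * 4        # row pointers (int32)
)
print(round(bytes_total / 1e6, 1), "MB")   # -> 14.4 MB, same ballpark as ~20MB
```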
> Very cool, thanks Darren. Can you think of a way to automate testing the behaviour, like is there a gold standard we can compare against, if not we can just take current behaviour as a reference.
I am sure someone has solved its eigenfunctions for a sphere, and I think I can at least do it numerically with a sufficient degree of confidence. That could be a good gold standard.
I believe this is technically an exact solution for a large enough number of eigenvalues: http://brainimaging.waisman.wisc.edu/~chung/lb/
> I believe this is technically an exact solution for a large enough number of eigenvalues: http://brainimaging.waisman.wisc.edu/~chung/lb/
Agree. It would be a good benchmark.
> Thanks Darren! Ditto to Gabe's question. I at one point implemented the laplace beltrami operator but was forced to use R's interface to octave to do it (needed generalized SVD).
> Can I propose that you drop the man page edits from the commit? Also can we collectively think of a way to add some tests that this is doing the right thing? There is a heat kernel smoother in CIVET we can use if it's available
@cfhammill @gdevenyi I randomly picked eigenvectors and created an initial scalar field for testing. Since I know which eigenvectors I picked, I can analytically determine the time evolution of the field and compare it to the numerical results. Here is an example:
Here is the unblurred field:
Here is the numerical blurring:
Here is the analytical solution:
Here is a comparison of the numerical and analytical values at each vertex:
Clearly the analytical image is sharper, but the numerical approximation is OK for my purposes. Please feel free to play around with the RMarkdown document located here: /micehome/dfernandes/Documents/test_cortex_smoothing/benchmark_19mar2020.Rmd
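The benchmark idea can be sketched on a toy graph (Python for illustration; the real test lives in the RMarkdown document above): start from a known Laplacian eigenvector, evolve the heat equation numerically with finite time steps, and compare against the closed-form decay exp(−λt) of that eigenmode.

```python
import numpy as np

# Toy benchmark: on a cycle graph, a Laplacian eigenvector v with eigenvalue
# lambda evolves under du/dt = -L u as u(t) = exp(-lambda * t) * v.
n = 30
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, V = np.linalg.eigh(L)          # eigenvalues ascending, columns = vectors
k = 3                               # arbitrarily chosen eigenpair
u = V[:, k].copy()                  # initial field = a single eigenvector

dt, steps = 0.01, 500               # small step so forward Euler tracks the ODE
for _ in range(steps):
    u = u - dt * (L @ u)            # numerical "blurring"

analytic = np.exp(-lam[k] * dt * steps) * V[:, k]
print(np.abs(u - analytic).max() < 1e-3)   # numerical ~= analytical -> True
```

With a mix of eigenvectors the same comparison works mode by mode, since each component decays independently at its own rate.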
Amazing work, thanks @DJFernandes. I created a new PR (#278) of your master branch onto RMINC develop. From there I can start adding automatic tests to our test suites. Closing this one, and merging the other.
Added functions that allow for smoothing on manifolds like the cortex. Is this something RMINC should have?