nighres / nighres

Processing tools for high-resolution neuroimaging
http://nighres.readthedocs.io/en/latest/
Apache License 2.0

current status #83

Closed ingo-m closed 5 years ago

ingo-m commented 6 years ago

Hi! I've noticed that there has been a lot of activity on the nighres repo lately :) I've lost track a bit of the current status, so I made a list of the features I want to try out (in fact, all the features needed to perform a depth sampling analysis, including retinotopic ROI selection on the surface). Could you confirm that I didn't miss anything?

| CBS feature | Implemented |
| --- | --- |
| get volume | yes |
| probability to level set | yes |
| volumetric layering | yes |
| profile sampling | yes |
| surface mesh inflation | no |
| surface mesh mapping | no |

By the way, I tested the first four features on a previous version of nighres and it worked fine for me.

Greetings from Maastricht

piloubazin commented 6 years ago

Hi Ingo,

Sorry for the slow response; we've been running around quite a lot with the end of the year. You are correct with regard to your list so far. The meshing and inflation algorithms are giving me a bit of extra work because CBS Tools was using external libraries from JIST for the marching cubes algorithm, which I have to replace somehow. Other modules for brain segmentation, registration, and profile-based analysis are in the pipeline; let me know if you have specific wishes from existing CBSTools modules and I can prioritize. Note also that any module in the CBSTools core/ sub-directory is ready to wrap in Nighres, and you're welcome to give it a try yourself :)

ingo-m commented 6 years ago

Hi Pilou,

Thanks for the info (and sorry for my slow reply).

I've been trying to understand the structure of nighres a bit better, but there is one thing I'm not sure about:

If I understand it correctly, nighres accesses CBS Tools functions through a Java virtual machine, right? (Sorry if I got this wrong, I know just about nothing about Java.) For example, in nighres/nighres/surface/probability_to_levelset.py, line 55:

cbstools.initVM(initialheap='6000m', maxheap='6000m')

Subsequently, the Java VM is accessed in the form of a Python object:

prob2level = cbstools.SurfaceProbabilityToLevelset()

Data is then passed to the Python object `prob2level`. But what I don't really understand is where the interface to the Java CBS Tools code is located. There always seems to be an `import cbstools` statement. Does this refer to nighres/cbstools/__init__.py? Is that the interface?

And one more question: the reason you are suggesting using a Docker image is that the Java side of CBS Tools needs to be compiled, right?

piloubazin commented 6 years ago

The library works (for now) as a wrapper for the Java code. Rather than recoding all my previous packages into slower and buggier Python, we built an infrastructure around them. So everything still runs in Java internally, and runs exactly the same way as the CBS Tools modules, because they are in fact the same.

So what nighres does is import images into properly formatted types, start Java, pass the data to the corresponding CBS Tools module, and then reformat the results into nice Python objects.
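As a rough sketch of that flow, modelled on the probability_to_levelset wrapper quoted above (the Java-side method names here are illustrative assumptions and may not match the actual CBS Tools API exactly):

```python
# Sketch of the wrapping pattern: NIfTI in, Java module, NIfTI out.
# The setter/getter names are assumptions for illustration only.
import numpy as np
import nibabel as nb
import cbstools  # JCC-built Python bindings to the CBS Tools Java code


def probability_to_levelset_sketch(prob_img):
    # 1. start the Java VM (once per process)
    cbstools.initVM(initialheap='6000m', maxheap='6000m')

    # 2. instantiate the corresponding CBS Tools Java module
    prob2level = cbstools.SurfaceProbabilityToLevelset()

    # 3. hand the image data over to Java as a flat float array
    data = prob_img.get_fdata().astype(np.float32)
    dims = data.shape
    res = prob_img.header.get_zooms()
    prob2level.setProbabilityImage(cbstools.JArray('float')(data.flatten('F')))
    prob2level.setDimensions(dims[0], dims[1], dims[2])
    prob2level.setResolutions(res[0], res[1], res[2])

    # 4. run the Java code
    prob2level.execute()

    # 5. reformat the result into a Python/nibabel object
    levelset = np.reshape(np.array(prob2level.getLevelSetImage(),
                                   dtype=np.float32), dims, 'F')
    return nb.Nifti1Image(levelset, prob_img.affine, prob_img.header)
```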

Using the Docker image is an easy way to get everything set up; otherwise you need Java 8 to run the modules and Python's JCC to build the CBS Tools Python bindings.

ingo-m commented 6 years ago

Are there any plans for a CBS Tools / nighres related brainhack event this year?

piloubazin commented 6 years ago

It's a bit tough to say right now from my end; there are a few too many things to keep organized at once. I'll see if I can set something up for Brainhack Global, but I might also end up joining a local event in France. If you guys want a nighres-themed brainhack at some point, we could also organize that outside of the global event, so let me know.

juhuntenburg commented 6 years ago

I'm currently a bit out of the loop because of my new job. But if there were a Nighres-related project at a brainhack, I could try to make sure I can help out a bit remotely on those days.

In the future it would be cool if we could bring everyone interested in working on it together at an existing brainhack (no additional organization required, and potentially more people to join). I hear brainhacks in Paris are happening more and more frequently, so maybe that would be an option to join at some point? Or in case you are planning another one in Amsterdam at some point, Pilou?

ingo-m commented 6 years ago

@juhuntenburg congratulations on your new job!

ingo-m commented 6 years ago

I keep wondering about an aspect of depth sampling, and I would be interested in your opinion.

When constructing cortical depth levels from typical high-resolution fMRI voxels (e.g. 0.8 mm isotropic), we inevitably need to oversample. This is usually done by linear interpolation (or, in the case of the spatial GLM, by approximating the volume ratios of voxels with respect to layers). However, it seems to me that a voxel's contribution is assumed to depend linearly on its distance to the layer (or, in the case of the spatial GLM, a cubic model of the voxel is used). Wouldn't it make more sense (from a physics perspective) to model the voxel as a 3D Gaussian, so that a voxel's contribution to a layer decreases not linearly, but with a Gaussian centred on the voxel's centre?
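As a toy illustration of the two weighting schemes (the distances, voxel size, and Gaussian sigma are arbitrary numbers, just for illustration):

```python
# Toy comparison: linear (distance-based) interpolation weights versus a
# Gaussian kernel centred on each voxel. All numbers are arbitrary examples.
import numpy as np

voxel_size = 0.8   # mm, isotropic
sigma = 0.4        # mm, assumed width of the Gaussian voxel model

# distances (mm) from one depth-sample point to the centres of nearby voxels
distances = np.array([0.1, 0.5, 0.9, 1.3])

# linear interpolation: weight falls off linearly and is zero beyond one voxel
w_linear = np.clip(1.0 - distances / voxel_size, 0.0, None)
w_linear /= w_linear.sum()

# Gaussian model: weight falls off smoothly with distance from the voxel centre
w_gauss = np.exp(-0.5 * (distances / sigma) ** 2)
w_gauss /= w_gauss.sum()

print("linear weights:  ", np.round(w_linear, 3))
print("Gaussian weights:", np.round(w_gauss, 3))
```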

piloubazin commented 6 years ago

Ideally, you would want to use the point spread function as a continuous representation of the voxel measurement. Its shape and size depend on the MR sequence, acquisition parameters, etc., but you could indeed approximate it with a Gaussian function, as you suggest. Of course, when measuring relative contributions, and assuming the PSF is locally invariant, this is not so different from the voxel partial volumes used in the GLM approach. The linear interpolation is definitely not as good, though.
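A quick 1D sketch of that point, with arbitrary numbers for the voxel size, layer boundaries, and assumed Gaussian PSF width:

```python
# 1D illustration: for a locally invariant Gaussian PSF, the PSF-weighted
# contribution of a voxel to a layer is close to its partial-volume fraction.
from math import erf, sqrt

voxel_centre = 0.0
voxel_size = 0.8               # mm
sigma = 0.4                    # mm, assumed Gaussian PSF width
layer_lo, layer_hi = 0.2, 0.7  # layer boundaries in mm

# partial-volume model: fraction of the box-shaped voxel inside the layer
box_lo, box_hi = voxel_centre - voxel_size / 2, voxel_centre + voxel_size / 2
overlap = max(0.0, min(box_hi, layer_hi) - max(box_lo, layer_lo))
pv_fraction = overlap / voxel_size

# Gaussian PSF model: integral of the Gaussian over the layer interval
def gauss_cdf(x):
    return 0.5 * (1.0 + erf((x - voxel_centre) / (sigma * sqrt(2.0))))

psf_fraction = gauss_cdf(layer_hi) - gauss_cdf(layer_lo)

print(f"partial-volume fraction: {pv_fraction:.3f}")  # 0.250
print(f"Gaussian PSF fraction:   {psf_fraction:.3f}")  # ~0.268
```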

juhuntenburg commented 5 years ago

@ingo-m I am cleaning up issues; can I close this?