nicholst opened this issue 6 years ago
Hi @nicholst ,
Do you have any example data that could be used with these functions, e.g. some inputs to `palm_ciftiwrite`? My understanding is that, currently, the `.mat` input we have stores multiple volumes in some sort of triangular mesh. A secondary file is needed for the wild bootstrap, which stores structural information about the mesh. Would this be the same layout for CIFTI images, or could it be like NIfTIs, where there is one file per volume and the header includes the mesh information?
Can you connect with Habib and/or look at how he supports CIFTI in NINGA? CIFTI is a very general and complex format, and it is not feasible to support every facet of it, but he must have chosen some important subset of CIFTI types to support.
I have committed to provide cifti support by end of January. @TomMaullin please ping me about this in Jan so we can talk with Habib and possibly Anderson Winkler about what we need to integrate PALM CIFTI code.
@nicholst, I am trying to work on this as you mentioned that it was relatively urgent. However, as I am not familiar with this type of data, I am not sure how people are using it. From what I can understand, it contains both volumetric data and surface data. Is the analysis intended to be performed on both types of data at the same time, or only on one of them?
Hi @BryanGuillaume - Good points. CIFTI is, as a whole, highly complex and can represent many, many different types of data... surfaces, subcortical volumes, and 'connectomes', square matrices where each row/column is an ROI. HOWEVER, in the first instance I was only thinking about surface data, and secondarily 'brainordinates' data, i.e. surface+subcortical.
Here are some links:
And, could I ask @andersonwinkler: What exactly do you support in PALM? Just surfaces? Brainordinates? Connectomes too?
I’ll ask @asoroosh to help you get some HCP cifti data... it might already be on rescomp.
Hi @BryanGuillaume / @nicholst - I don't think we have the CIFTIs on the rescomp. But I don't think downloading them via amazon s3 buckets would be any problem.
Soroosh - Can you arrange a t/c with Bryan to show him how to do this? Bryan - There is a small learning curve: you have to get set up with HCP -- get a login if you haven't already done so (https://www.humanconnectome.org/study/hcp-young-adult/data-use-terms) -- and an Amazon Web Services identity, but then you should be able to clone what Soroosh has done.
Sure! I'll get in touch shortly.
Thanks, I will try to get an HCP login asap and get in touch with Soroosh for this.
On a side note, regarding the CIfTI support in SPM: I have looked into the main SPM code and it seems that they have made many changes only for supporting GIfTI inputs (I am currently going through the code to reproduce these changes in SwE), but nothing for CIfTI. Nevertheless, the toolbox FieldTrip, which actually ships with SPM as an external toolbox, seems to have some CIfTI support. I believe it does not rely on `wb_command` like PALM does, and it also seems to extract volumetric data from CIfTI (at least, that is what I understand from the help header), which is not possible with the code in PALM. Thus, it might be a good idea to use the functions in FieldTrip for SwE. @nicholst, do you think it is OK to call the functions in FieldTrip directly, or should we aim to create our own CIfTI read/write functions?
The dependence on wb_command is a huge headache, yes, but there is a problem with FieldTrip: as far as I know, it takes whatever CIFTI input you give it and interpolates it to match a standard FieldTrip template. That was the status a few years ago, and as a result, everyone (outside of FieldTrip) was stuck using wb_command.
I'll ping Guillaume Flandin to see if this is still the case.
@BryanGuillaume I will here summarise what I understand to be the state of play of our CIFTI efforts.
The user can simply supply a list of (consistent) CIFTI images, and we'll provide element-wise:
In addition to a list of (consistent) CIFTI images, they must provide:

- `{L,R}.sphere.32k_fs_LR.surf.gii`
- `Mean.{L,R}.area.func.gii`

Then additionally we'll provide
The cluster inference on surfaces, L+R, is straightforward (sometimes called "GrayOrdinate" data). Cluster inference on surfaces plus voxel data ("BrainOrdinate" data) requires matching of the marginal distributions of cluster size, which we have decided to do on the basis of median and IQR (or, maybe, median and Q2-Q3 distance, to improve fit on the important upper tail) of cluster area^(1/2) and cluster volume^(1/3).
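That median/IQR matching could be sketched as follows (a Python illustration with a hypothetical helper name, not the actual SwE implementation, which is in MATLAB):

```python
import numpy as np

def normalise_cluster_stats(surface_areas, cluster_volumes):
    """Map surface cluster areas and volumetric cluster sizes onto a
    comparable scale: take area^(1/2) and volume^(1/3), then centre by
    the median and scale by the IQR of each marginal distribution."""
    a = np.sqrt(np.asarray(surface_areas, dtype=float))
    v = np.cbrt(np.asarray(cluster_volumes, dtype=float))

    def med_iqr(x):
        q1, med, q3 = np.percentile(x, [25, 50, 75])
        return med, q3 - q1

    a_m, a_s = med_iqr(a)
    v_m, v_s = med_iqr(v)
    return (a - a_m) / a_s, (v - v_m) / v_s
```

The FWE max distribution would then be built, per permutation, from the max over both normalised domains together.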
Hence the FWE max distribution will be based on the max of cluster statistics after a transformation like `(area^(1/2) - a_m) / a_s` and `(volume^(1/3) - v_m) / v_s`, where the `_m` and `_s` terms are the location and scale (median and IQR) of each marginal distribution.
As lowest priority, we can consider TFCE. Again, on surfaces, this should be relatively straightforward. On BrainOrdinate data, we'll need a similar normalisation. While TFCE is usually denoted:
Σ_t h_t^H e(h_t)^E
it can instead be seen as a weighted P-norm
(Σ_t h_t^H e(h_t)^E)^(1/E)
Then, as RFT suggests H=2, E=3/2 for volume, and H=2, E=2/2 for surface, these two sets of values can be used in each domain. If life were fair (i.e. if RFT worked), no further transformation would be necessary. But it is probably safest to look at some marginal distribution of these weighted-p-norm forms of TFCE and see if they are at all comparable (perhaps a Box-Cox transformation will suggest some different values?!?), and at a minimum use the same location/scale adjustment as used for cluster size inference.
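To make the two forms concrete, here is a toy TFCE computation on a 1-D profile (a stand-in for a surface mesh; the function name and discretisation are illustrative, not the toolbox's implementation):

```python
import numpy as np

def tfce_1d(stat, H=2.0, E=1.0, dh=0.1):
    """Toy TFCE on a 1-D statistic profile: for each threshold h,
    e(h) is the extent of the supra-threshold cluster containing each
    vertex.  Returns the classic per-vertex sum Sum_t h_t^H e(h_t)^E
    and its weighted p-norm form (sum)^(1/E)."""
    stat = np.asarray(stat, dtype=float)
    tfce = np.zeros_like(stat)
    n_steps = int(round(stat.max() / dh))
    for h in np.linspace(dh, stat.max(), n_steps):
        mask = stat >= h
        i, n = 0, len(mask)
        while i < n:
            if mask[i]:
                j = i
                while j < n and mask[j]:
                    j += 1
                # every vertex in this run shares cluster extent (j - i)
                tfce[i:j] += dh * (h ** H) * ((j - i) ** E)
                i = j
            else:
                i += 1
    return tfce, tfce ** (1.0 / E)
```

With E=1 (the surface E=2/2 case) the p-norm form coincides with the plain sum; for the volume case E=3/2 the `(sum)^(1/E)` rescaling is what puts the two domains on a more comparable footing before the location/scale adjustment.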
Hi @nicholst ,
I agree with your summary, except for the following typo:
`(volume^(1/2) - v_m) / v_s` should be `(volume^(1/3) - v_m) / v_s`
Currently, I am focusing on the simplest case of parametric analyses, and it seems that I can estimate the model properly with CIfTI files. Now I am starting to look at how to display the results properly. For the latter, I think that if I detect only surface data or only volumetric data in the CIfTI, I will be able to display the results relatively easily. However, I am still a bit unsure about what to do when I have both types of data. Maybe I could simply open two result windows, one for volumetric data and one for surface data. What do you think about that?
Hi @BryanGuillaume,
Thanks for catching that. I have corrected that typo in the comment above.
On visualisation, remember, people using CIFTI are going to do most of their visualisation in Connectome Workbench, a super-featured tool for surface/HCP data.
So, I think that the following logic will suffice:
It also indicates a complication... in the footer, we say "xxx voxels (xxx mm^3)", right? That will have to be amended. If working only with vertices (no area supplied), a vertex count alone will be given, and if a metric file is given, then the total area in mm^2 can be given. But if surface+volume, both total area and volume will need to be noted, right?
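That case logic could be sketched like this (Python rather than the toolbox's MATLAB, with a hypothetical `results_footer` helper):

```python
def results_footer(n_vertices=0, total_area_mm2=None,
                   n_voxels=0, voxel_volume_mm3=None):
    """Build the results-window footer for the possible CIFTI cases:
    vertices only (no area file), vertices with a metric/area file,
    voxels only, or surface+volume ('BrainOrdinate') data."""
    parts = []
    if n_vertices:
        if total_area_mm2 is not None:
            parts.append("%d vertices (%.1f mm^2)" % (n_vertices, total_area_mm2))
        else:
            # no area metric supplied: vertex count alone
            parts.append("%d vertices" % n_vertices)
    if n_voxels:
        if voxel_volume_mm3 is not None:
            parts.append("%d voxels (%.1f mm^3)" % (n_voxels, n_voxels * voxel_volume_mm3))
        else:
            parts.append("%d voxels" % n_voxels)
    return ", ".join(parts)
```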
Note on `dtseries` vs `dscalar`. For simplicity, I suggest that we only accept a single instance of data from each file. At present we have, as an example, `dtseries` data that has but a single brainordinate image.
I mention this because we should be looking out for this: if a `dtseries.nii` image is given that has 2 or more 'instance/subject' dimensions, we need to either
Do you agree @BryanGuillaume?
Regarding the last comment, Mike Harms notes that `dscalar` files can indeed contain multiple images (e.g. all the contrasts for a given first-level analysis). Hence we need to be thinking near-term about how to select out the image of interest.
Note that SPM already has the convention of specifying a "volume" number with a comma, e.g. `image001.nii,3` is the third volume (`spm_vol` automatically parses this).
Parsing this integer is straightforward. Is it clear how to consistently select out the right surface / surface+volume given this integer?
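Parsing that convention is indeed simple; a minimal Python sketch (the function name is hypothetical, and SwE/SPM would do this in MATLAB):

```python
def split_volume_spec(spec):
    """Parse SPM's 'filename,index' convention, e.g. 'image001.nii,3'
    selects the third map/volume (1-based, as spm_vol does).
    Returns (filename, index), with index defaulting to 1."""
    if "," in spec:
        fname, idx = spec.rsplit(",", 1)
        return fname, int(idx)
    return spec, 1
```

For a multi-map `dscalar`, the returned index would presumably select the corresponding row of the data matrix and the matching `NamedMap`; whether that mapping is always consistent for surface+volume data is exactly the open question above.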
I will have this in mind. Do you know if we have such a file at our disposal that I could use for testing?
I've asked Mike Harms for some data for testing.
CIfTI support has been added in PR #141, so I am closing this issue. Nevertheless, it is worth pointing out that there is still additional work to be done regarding CIfTI files. For example, TFCE has not yet been implemented for this kind of input. I will add this as a separate issue.
Given information from Tim Coalson, there is a problem we need to fix: when reading a multi-image CIFTI and extracting just one image from it, we modify the XML to set `NumberOfSeriesPoints="1"`, but Tim also points out that
> Note that if you read in dscalar with multiple maps, you will need to delete all but one "NamedMap" element (and the children) in order to write out a single-map file.
It would seem logical to save the "NamedMap" corresponding to the image/map selected and delete all the others.
@BryanGuillaume is it clear how to implement this? (It makes sense to me, especially looking at the CIFTI XML UML diagram https://www.nitrc.org/forum/attachment.php?attachid=333&group_id=454&forum_id=1955 but I don't know what's actually involved).
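As a rough illustration of the pruning (Python's `xml.etree` on a string of the CIFTI-2 extension XML, assuming un-namespaced `MatrixIndicesMap`/`NamedMap` elements as in the UML diagram; the function name is hypothetical):

```python
import xml.etree.ElementTree as ET

def keep_one_named_map(cifti_xml, keep_index):
    """Keep only the NamedMap at position keep_index (0-based) within
    each MatrixIndicesMap, deleting its siblings, so the extension XML
    describes a single-map dscalar.  Returns the modified XML string."""
    root = ET.fromstring(cifti_xml)
    for mim in root.iter("MatrixIndicesMap"):
        named_maps = mim.findall("NamedMap")
        if len(named_maps) > 1:
            for i, nm in enumerate(named_maps):
                if i != keep_index:
                    mim.remove(nm)  # removes the element and its children
    return ET.tostring(root, encoding="unicode")
```

This only addresses the dscalar/NamedMap part; the `NumberOfSeriesPoints` attribute edit mentioned above would remain a separate, simpler change.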
I see. I have two ideas about how to implement this, but I would need to try them out tomorrow to see if they are working.
For the "NamedMap" to keep, one issue is that the user could potentially select several contrasts from the same file, so the choice might be biased towards one of them. Anyway, I need to try it in code to see how this will work.
Crucial detail I forgot to convey: Tim said this is only something you need to do for dscalar images.
I do see your point about which of several contrasts... but, we also have the arbitrariness of only using the first CIFTI image to clone... right? So I won't worry about that.
Indeed, I cannot see any "NamedMap" in the XML of `dtseries.nii` files. Nevertheless, I can see some in the `dscalar.nii` files I have, but they all contain only one row of data. Would you have a `dscalar.nii` file with multiple contrasts at hand?
I don't... I'll ask for one.
Please look on rescomp... I've just created dscalar equivalents for all of the dtseries images you've used previously.
Low priority, but increasingly important due to the growing use of HCP-style analyses.
See PALM for a template: https://github.com/andersonwinkler/PALM/blob/master/palm_ciftiread.m https://github.com/andersonwinkler/PALM/blob/master/palm_ciftiwrite.m
As a warning, it ain't pretty: functionality is supported (as recommended by the HCP folks) by a system call to `wb_command`.
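Roughly, PALM's approach amounts to shelling out to Workbench's `-cifti-convert -to-gifti-ext` subcommand to turn the CIFTI into a readable GIFTI; a Python sketch of that kind of call (the wrapper names are made up, and PALM itself does this in MATLAB via `system`):

```python
import shutil
import subprocess

def cifti_to_gifti_cmd(cifti_in, gifti_out):
    """Build the wb_command invocation that converts a CIFTI file to an
    external-binary GIFTI, which standard GIFTI readers can then load."""
    return ["wb_command", "-cifti-convert", "-to-gifti-ext",
            cifti_in, gifti_out]

def convert(cifti_in, gifti_out):
    # Fail early with a clear message if Connectome Workbench is absent.
    if shutil.which("wb_command") is None:
        raise RuntimeError("wb_command (Connectome Workbench) not on PATH")
    subprocess.run(cifti_to_gifti_cmd(cifti_in, gifti_out), check=True)
```

This is exactly the dependency complained about above: any reader built this way requires Connectome Workbench to be installed.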