nilearn / nistats

Modeling and statistical inference on fMRI data in Python
BSD 3-Clause "New" or "Revised" License

How to interpret labels and estimates returned from run_glm? #401

Closed dangom closed 4 years ago

dangom commented 4 years ago

If I run a GLM I get labels and estimates. What do their numbers mean, and how do I interpret the regression results?

 In [168] labels
Out [168] array([0.83, 0.82, 0.77, 0.86, 0.89, 0.91, 0.9 , 0.92, 0.92, 0.92, 0.92,
                 0.92, 0.92, 0.92, 0.92, 0.92, 0.91, 0.9 , 0.89, 0.87, 0.85, 0.8 ,
                 0.83, 0.86, 0.86, 0.89, 0.9 , 0.92, 0.92, 0.92, 0.91, 0.92, 0.92,
                 0.92, 0.92, 0.92, 0.91, 0.89, 0.9 , 0.86])

 In [169] estimates
Out [169] {0.77: <nistats.regression.RegressionResults at 0x145e38320>,
           0.8: <nistats.regression.RegressionResults at 0x159164d68>,
           0.82: <nistats.regression.RegressionResults at 0x1591641d0>,
           0.83: <nistats.regression.RegressionResults at 0x159164668>,
           0.85: <nistats.regression.RegressionResults at 0x159164fd0>,
           0.86: <nistats.regression.RegressionResults at 0x1591640b8>,
           0.87: <nistats.regression.RegressionResults at 0x1591647f0>,
           0.89: <nistats.regression.RegressionResults at 0x159164940>,
           0.9: <nistats.regression.RegressionResults at 0x159164048>,
           0.91: <nistats.regression.RegressionResults at 0x159164160>,
           0.92: <nistats.regression.RegressionResults at 0x1591645c0>}
bthirion commented 4 years ago

Dear dangom,

Maybe this discussion would be better suited to Neurostars. run_glm is a low-level function: obviously, its output is not meant to be dealt with directly. We only use it in nistats at the moment because there is no high-level function to handle surface-based GLMs (for this we need a SurfaceMasker object in nilearn, which does not exist yet).

GLMs are indexed by the noise model. More precisely, we use the AR(1) coefficients of the model residuals to group the voxels into classes. Within each class, the whitening procedure, and hence the AR(1) GLM computations, are fixed.

Hence, for each of these labels, you obtain a RegressionResults object that yields exactly the summary statistics you need for contrast computations.
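For concreteness, here is a minimal sketch of that pattern on synthetic data (import paths assumed from nistats; in recent nilearn the equivalent function is nilearn.glm.first_level.run_glm):

    import numpy as np
    from nistats.first_level_model import run_glm

    # synthetic data: 40 time points, 500 surface vertices, 3 regressors
    n_scans, n_vertices, n_regressors = 40, 500, 3
    Y = np.random.randn(n_scans, n_vertices)      # time series (time x vertices)
    X = np.random.randn(n_scans, n_regressors)    # design matrix (time x regressors)

    labels, estimates = run_glm(Y, X, noise_model="ar1")
    # labels: one discretized AR(1) coefficient per vertex; vertices sharing a
    #         value are whitened together
    # estimates: dict mapping each unique AR(1) value to the RegressionResults
    #            object holding the fit for the vertices in that class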

These have to be passed to a function that performs contrast handling, as in the surface-based GLM example.

Does that make sense? I agree that this is ugly and hacky.

dangom commented 4 years ago

Thanks bthirion. That makes sense.

As in the example given in the documentation, I noticed I can get the betas I'm after by running:

from nistats.contrasts import compute_contrast  # assuming the nistats import path
con = compute_contrast(labels, estimates, contrast_val, contrast_type="t")
con.effect  # effect-size (beta) estimates for the contrast

But I guess end-users shouldn't be using either run_glm or compute_contrast directly, as you say. I'll close this issue and open a discussion on Neurostars if I have other questions. As always, thanks for sharing the project and for the quick response.

jeromedockes commented 4 years ago

run_glm is a low-level function: obviously, its output is not meant to be dealt with directly. We only use it in nistats at the moment because there is no high-level function to handle surface-based GLMs (for this we need a SurfaceMasker object in nilearn, which does not exist yet).

Is this something that we should consider? Indeed very little would be required to allow this example to use FirstLevelModel instead of the low-level functions.

bthirion commented 4 years ago

We should consider this, but this has to rely on a proper SurfaceMasker I guess.

kchawla-pi commented 4 years ago

Let's spec out, in a new issue in Nilearn, everything we need to implement a SurfaceMasker. Whenever we decide to work on it, that will be a good document to start from.

jeromedockes commented 4 years ago

We should consider this, but this has to rely on a proper SurfaceMasker I guess.

Would that masker map from volume to surface (but then what would be the inverse_transform)? Or should we add proper support for surface data in nilearn (though the nibabel documentation for GIfTI images is somewhat cryptic)?
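For reference, nilearn already exposes a one-way volume-to-surface sampler that the first option could build on; a rough sketch (the file name is a placeholder, fsaverage5 is the default fetched mesh):

    from nilearn import datasets, surface

    fsaverage = datasets.fetch_surf_fsaverage()
    # texture: array of shape (n_vertices, n_time_points) sampled from the volume
    texture = surface.vol_to_surf("fmri_volume.nii.gz", fsaverage.pial_left)
    # there is no ready-made inverse_transform back into the volume, which is
    # exactly the open question here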

bthirion commented 4 years ago

I mean the second one: add proper support for surface data in nilearn.

