Closed dangom closed 4 years ago
Dear dangom,
Maybe this discussion would be better suited to Neurostars. run_glm is a low-level function: its output is not meant to be handled directly. We only use it in nistats at the moment because there is no high-level function to deal with surface-based GLMs (for this we need a SurfaceMasker object in nilearn, which does not exist yet).
GLMs are indexed by the noise model. More precisely, we use the AR(1) coefficients of the model residuals to group the voxels into classes. Within each class, the whitening procedure, and hence the AR(1) GLM computation, is fixed.
Hence, for each of these labels, you obtain a RegressionResults object that yields exactly the summary statistics you need for contrast computation.
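The grouping step described above can be sketched in plain NumPy. This is a conceptual illustration, not the actual nistats implementation: the AR(1) estimate and the bin count are simplified, and the per-class GLM fit is left as a stub.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 50
residuals = rng.standard_normal((n_scans, n_voxels))  # stand-in for OLS residuals

# Estimate a lag-1 autocorrelation coefficient per voxel.
ar1 = np.sum(residuals[1:] * residuals[:-1], axis=0) / np.sum(residuals ** 2, axis=0)

# Quantize the coefficients into a small number of bins; voxels sharing a bin
# get the same whitening matrix, so the AR(1) GLM is fit once per class.
n_bins = 10
labels = np.round(ar1 * n_bins) / n_bins

for val in np.unique(labels):
    voxels_in_class = np.where(labels == val)[0]
    # ... fit one whitened GLM covering all voxels in this class,
    # yielding one RegressionResults-like object per label.
```

In nistats itself, run_glm returns exactly this pairing: an array of labels (one per voxel) and a dict of RegressionResults keyed by label value.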
These objects then have to be passed to a function that performs contrast handling, as in the surface-based GLM example.
Does that make sense? I agree that this is ugly and hacky.
Thanks bthirion. That makes sense.
As in the example given in the documentation, I noticed I can get the betas I'm after by running:

from nistats.contrasts import compute_contrast

con = compute_contrast(labels, estimates, contrast_val, contrast_type="t")
con.effect
But I guess end-users shouldn't be using either run_glm or compute_contrast directly, as you say. I'll close this issue and open a discussion on Neurostars if I have other questions. As always, thanks for sharing the project and for the quick response.
run_glm is a low-level function: its output is not meant to be handled directly. We only use it in nistats at the moment because there is no high-level function to deal with surface-based GLMs (for this we need a SurfaceMasker object in nilearn, which does not exist yet).
Is this something that we should consider? Very little would be required to let this example use FirstLevelModel instead of the low-level functions.
We should consider this, but this has to rely on a proper SurfaceMasker I guess.
Let's spec out what we would need to implement a SurfaceMasker in a new issue in Nilearn. Whenever we decide to work on it, it'll be a good document to start from.
We should consider this, but this has to rely on a proper SurfaceMasker I guess.
Would that masker map from volume to surface (but what would the inverse_transform be)? Or should we add proper support for surface data in nilearn (but the nibabel documentation for Gifti images is somewhat cryptic)?
I mean the second one: add proper support for surface data in nilearn.
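To make the second option concrete, here is a minimal sketch of what such a masker's interface could look like, by analogy with nilearn's NiftiMasker. The class name, constructor, and method behavior are all assumptions for illustration; no such class exists in nilearn at the time of this thread.

```python
import numpy as np

class SurfaceMasker:
    """Hypothetical masker for surface data (illustration only).

    transform:        (n_vertices, n_timepoints) mesh data -> (n_timepoints, n_kept) array
    inverse_transform: (n_timepoints, n_kept) array -> full-mesh array, zeros off-mask
    """

    def __init__(self, mask):
        # Boolean array of shape (n_vertices,) selecting the vertices to keep.
        self.mask = np.asarray(mask, dtype=bool)

    def transform(self, surf_data):
        # Keep only masked vertices and put time on the first axis,
        # matching the (n_samples, n_features) convention of run_glm.
        return np.asarray(surf_data)[self.mask].T

    def inverse_transform(self, signals):
        # Scatter the kept vertices back onto the full mesh, zero elsewhere.
        signals = np.asarray(signals)
        out = np.zeros((self.mask.size, signals.shape[0]))
        out[self.mask] = signals.T
        return out

# Toy usage: a 4-vertex mesh with 3 timepoints, one vertex masked out.
mask = np.array([True, False, True, True])
data = np.arange(12, dtype=float).reshape(4, 3)
masker = SurfaceMasker(mask)
X = masker.transform(data)            # shape (3, 3)
back = masker.inverse_transform(X)    # shape (4, 3), zeros at vertex 1
```

With an interface like this, FirstLevelModel could accept surface data the same way it accepts volumes through NiftiMasker, and inverse_transform would answer the question above: it maps statistical maps back onto the mesh rather than into a volume.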
If I run a GLM I get labels and estimates. What do their numbers mean, and how do I interpret the regression results?