cvnlab / GLMdenoise


Can GLMdenoise be run on surface data? #6

Open billbrod opened 6 years ago

billbrod commented 6 years ago

I would like to run GLMdenoise on surface data (specifically freesurfer .mgz files). Is that possible or does GLMdenoise make some assumptions that require the data to be volumes or something about their spatial relationships to each other? If so, are any tweaks required (e.g., making sure the input data is 4D, with time along the fourth dimension)?

One issue that comes to mind is that there are separate freesurfer .mgz files for the right and left hemisphere; should I just concatenate the data matrices?

I'd like to do this because I save the models, modelmd, and modelse results to nifti files and then project them to the surface. However, the interpolation affects these all in different ways, so that the median of a surface vertex across bootstraps (from models) is no longer the corresponding value on the modelmd surface (and similarly for modelse; the difference for the median is small, but it's large for the standard errors). Running GLMdenoise on surface data seems like the best way around this, but I'd welcome any other suggestions! (Currently, my plan is to ignore the projected modelmd and modelse and recompute the median and standard errors from models)

kendrickkay commented 6 years ago

Sorry for the very long delay (I guess I am not getting github notifications for some reason).

It is possible for sure. There aren't really any spatial assumptions regarding the case of volumes vs. the case of surface data (which is just a vector of voxels).

There is a way to pass data to GLMdenoisedata that specifies vector-oriented data (as opposed to volumes).

Yes, I would just concatenate hemispheres.
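In Python terms, the concatenation is just stacking along the vertex axis. A sketch with made-up vertex counts (in practice each hemisphere array would come from its .mgz file, e.g. via nibabel, reshaped to vertices x time):

```python
import numpy as np

# Hypothetical vertex counts and number of TRs, for illustration only.
n_lh, n_rh, n_tr = 1000, 1100, 240

# Stand-ins for the loaded hemisphere data; in practice something like:
#   lh = nib.load('lh.func.mgz').get_fdata().reshape(-1, n_tr)
lh = np.random.randn(n_lh, n_tr)
rh = np.random.randn(n_rh, n_tr)

# Concatenate along the vertex axis -> (V_lh + V_rh) x time,
# which is the 2D "XYZ x time" shape GLMdenoisedata accepts.
both = np.concatenate([lh, rh], axis=0)

# After fitting, split results back into hemispheres by vertex count.
lh_part, rh_part = both[:n_lh], both[n_lh:]
```

Keeping track of `n_lh` is all that's needed to undo the concatenation when writing results back out per hemisphere.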

One minor issue is that when you pass surface data, GLMdenoise has no access to non-gray matter voxels to generate the noise pool, whereas when you pass volume data, GLMdenoise does have access to those voxels. I'm not sure which way is "better", but I don't think it's a huge deal.

Yes, I agree that interpolation causes tricky "order of operations" issues. The cleanest way to handle interpolation to the surface is to do it up front and very early on (prior to analyzing the time-series). I agree that interpolation of statistical errors is weird and has no clear interpretation. (Alternatively, you could just go with 'nearest' interpolation, which does avoid interpretation problems.) Happy to Skype to discuss further if you like.

billbrod commented 6 years ago

Thanks, that all makes sense. I may reach out with more questions after I work on this a bit more.

Looking at GLMdenoisedata, I'm not seeing an argument that specifies the data is vector-oriented. Do I just pass the data as a 2d matrix (instead of 4d; collapsing X, Y, and Z, so it's just XYZ by time)?

kendrickkay commented 6 years ago

Yeah, it says "XYZ can be collapsed such that the data are given as a 2D matrix (XYZ x time);". I guess that might be somewhat confusing in the way I wrote it... Basically you can just give it a bunch of vertices like V x time.
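As an illustration, collapsing a 4D volume to that 2D shape is just a reshape, and the mapping is invertible so fitted results can go back into a volume. A sketch with synthetic data (dimensions are made up):

```python
import numpy as np

# A small synthetic 4D volume (X x Y x Z x time) as a stand-in for real data.
X, Y, Z, T = 4, 5, 6, 100
vol = np.random.randn(X, Y, Z, T)

# Collapse the spatial dimensions into one: XYZ x time.
flat = vol.reshape(-1, T)

# The same reshape in reverse restores the original volume exactly.
restored = flat.reshape(X, Y, Z, T)
assert np.array_equal(restored, vol)
```

For surface data there is no reshape to undo at all: the input is already V x time, with V the number of vertices.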

billbrod commented 6 years ago

Okay thanks, I'll give that a try.

billbrod commented 6 years ago

It runs on surfaces with no problem, but the images it creates are then un-viewable. Do you have a way that you easily view them? Otherwise we can work on implementing something and adding it to the code so that GLMdenoise detects whether it's a surface and plots appropriately.

kendrickkay commented 6 years ago

Yeah, that's a known issue. Volumes are much easier to view than unordered vector data (i.e. surface data). Not sure what a small-footprint approach would be to visualize surface data. Theoretically you would need the surface file and a viewpoint and all sorts of things like that. Our lab is trying to package together a simple automated surface visualization, but not quite ready. In a sense, even if you can't get the visualization of the surface from the running of GLMdenoise, you can still at least look at the other informative figure outputs. Thoughts?

billbrod commented 6 years ago

I think some simple automated surface visualization would be great. Wouldn't you only need a surface file? In cases like this, you could then save pictures from several default viewpoints (e.g., directly behind the brain, directly in front, to the left, to the right), which would display most of the information. We've been organizing our data in BIDS format and putting the freesurfer directory under derivatives/freesurfer/{subj_name}, so it wouldn't be too hard to find the appropriate surface files, but GLMdenoise probably shouldn't assume that. Would it then make sense to have the two surface files (one for the left hemisphere, one for the right; and the arrays containing their data, not the path to the files) as possible values in the opt structure? Then GLMdenoise could create the visualizations if they're present and skip it if they're not?

Jon and I talked about this a little more, looking at his fs_meshFromSurface code from vistasoft. We wouldn't want to add vistasoft as a requirement, but we could probably do something similar to get a simple visualization of the mesh. How were you all thinking about visualizing it?

kendrickkay commented 6 years ago

Interesting. We have our own approach (https://github.com/kendrickkay/cvncode/blob/master/cvnlookupimages.m) which is certainly not ready as a simple to use tool, but we are trying to make it more portable.

To plug your mesh stuff into GLMdenoise, you can try using the opt.drawfunction input. In fact, we have already hooked an internal version of our surface visualization stuff through opt.drawfunction and it works pretty well.

billbrod commented 6 years ago

Your approach definitely sounds more general. Do you think it's ready for us to use? We could put a little wrapper around it and pass that as opt.drawfunction

kendrickkay commented 6 years ago

No, not ready yet unfortunately. But it's good to know that there's at least one person out there who might find it useful!

billbrod commented 6 years ago

Okay. I'm not going to work on this right away, so keep me updated on the progress for cvnlookupimages and I'll let you know if I put something else together.

JWinawer commented 6 years ago

At least 2.


billbrod commented 5 years ago

Noah's put together a python script that converts the pngs to something more intelligible: https://github.com/WinawerLab/MRI_tools/blob/master/BIDS/GLMdenoisePNGprocess.py

From Noah describing how to use it:

I just checked in a script to the MRI_tools repository: BIDS/GLMdenoisePNGprocess.py. It crawls the PNG files in whatever directory you give it and makes flatmap versions of them (output as *_maps.png). It deduces the subject ID from the folder names and uses your SUBJECTS_DIR to get the freesurfer subject (unless there is a freesurfer directory in derivatives, in which case it uses that). It turns out that there is not only an extra row in the images but also one extra column. My assumption is that the last column is the one that should be ignored (rather than the first column, since the last row is the one that's ignored also), and this appears correct. I went ahead and ran it on wlsubj001 and wlsubj062, so you can check those figure directories for the outputs. Syntax is: python MRI_tools/BIDS/GLMdenoisePNGprocess.py

In order to run it, you'll need to have neuropythy installed along with numpy and scipy (which are pretty standard); pip install neuropythy should be sufficient.

You'll need neuropythy version 0.9.0

kendrickkay commented 5 years ago

Interesting. What is the general code mechanism that creates flatmaps? (And what type of views does it provide?)

Yes, my generic "makeimagestack.m" routine by default creates an extra row and extra column in between different image slices (at the bottom / at the right). In the case of surface data, it's just a gigantic column vector, so it adds one extra row at the end and one extra column at the right.
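So recovering the vertex values from such an image amounts to dropping that padding. A Python sketch (grid size is made up, and the column-major flattening order is an assumption based on MATLAB's memory layout, not something the source confirms):

```python
import numpy as np

# Suppose the PNG encodes V vertex values laid out as an R x C grid,
# plus one padding row at the bottom and one padding column at the right
# (makeimagestack's default separators). Values here are stand-ins.
R, C = 7, 9                      # grid size including the padding
img = np.random.rand(R, C)

# Drop the padding row and column, then flatten back to a vector of
# vertex values. order='F' assumes the vector was laid into the grid
# column-major, as MATLAB reshapes would do.
values = img[:-1, :-1].ravel(order='F')
```

Note this only recovers the color-mapped values as stored in the PNG; mapping them back to quantitative units would additionally require knowing the colormap and range used when the figure was written.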


JWinawer commented 5 years ago

Adding Noah to the email.


JWinawer commented 5 years ago

The script creates two flatmaps per hemisphere; these are orthographic projections of the posterior and anterior faces of the fsaverage-aligned spherical surface for the subject, with the occipital pole at the center of the posterior map (the anterior map is just the opposite side) and V1 aligned to the positive or negative x-axis (depending on whether it's LH or RH). The fsaverage-aligned spherical surface is used because it makes it easier to deduce where the occipital pole and V1 are (so all subjects get the same orientation/arrangement in the maps). This is great for visual cortex data, but less great for data in the temporal lobe or, e.g., around motor cortex. If it would be generally useful, I could probably add some flags that would change the map orientation or output a few additional ones. Cheers, -Noah


kendrickkay commented 5 years ago

Interesting. The surface map generation is also Python, it sounds like? And it relies on your lab's code infrastructure stuff? Is there any action item on this?
