neurostuff / PyMARE

PyMARE: Python Meta-Analysis & Regression Engine
https://pymare.readthedocs.io
MIT License

Use masked arrays in computations #44

Open tyarkoni opened 4 years ago

tyarkoni commented 4 years ago

Related to #9, we should add support for masked arrays wherever possible. This will allow vectorized estimation even when the studies in parallel datasets differ (i.e., when users pass in datasets with NaN values in different studies).

tsalo commented 4 years ago

I just want to keep track of the things I've noticed that will need to be changed or accounted for here:

tyarkoni commented 4 years ago

Thanks, this list is helpful. For most if not all of the above, working with masked operations shouldn't be too hard. E.g., while einsum won't natively do any masking, I think we can just pass a mask array as one of the operands; multiplying by the mask inside the summation will then produce the desired result.

That said, if it does look like working with masked arrays is going to require major changes, we might have to bite the bullet and just return NaN for any voxels that have missing values. But hopefully it won't come to that.
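The mask-as-operand idea above can be sketched in NumPy. This is a toy illustration, not PyMARE's actual estimator code: the shapes, the simple weighted sum, and all variable names are hypothetical. NaNs are zero-filled and a 0/1 mask is passed as an extra einsum operand, so missing studies contribute nothing to the sum.

```python
import numpy as np

# Toy data: rows are studies, columns are voxels; NaN = missing study.
y = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])
w = np.array([[0.5, 0.5],
              [0.25, 0.25],
              [0.25, 0.25]])

mask = (~np.isnan(y)).astype(float)  # 1 where observed, 0 where missing
y_filled = np.nan_to_num(y)          # NaN -> 0 so masked entries drop out

# Weighted sum over studies for every voxel at once; the mask operand
# zeroes out the missing studies inside the summation.
weighted_sum = np.einsum("sv,sv,sv->v", w, y_filled, mask)
# -> array([1.25, 2.0])
```

Note that this only handles sums cleanly; operations like matrix inversion per voxel don't reduce to a mask multiplication, which is presumably where the difficulty noted below comes in.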

tyarkoni commented 3 years ago

I was wrong; it's not straightforward. I'll leave this open, but I doubt I'll be able to work on it.

HippocampusGirl commented 2 years ago

An alternative to using masked arrays would be to call the statistics code separately for each voxel, filtering the input matrices to remove missing data. I have a working example at https://github.com/HALFpipe/HALFpipe/blob/main/halfpipe/stats/fit.py.
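The per-voxel approach can be sketched as follows. This is only a minimal illustration of the idea, not the HALFpipe code linked above: `fit_per_voxel` and `ivw_mean` are hypothetical names, and the estimator is a plain inverse-variance-weighted (fixed-effect) mean standing in for a real meta-analysis fit.

```python
import numpy as np

def fit_per_voxel(y, v, estimator):
    """Fit each voxel (column) separately, first filtering out studies
    with missing data. `estimator` is any callable taking the filtered
    effect sizes and sampling variances and returning a scalar."""
    n_studies, n_voxels = y.shape
    results = np.full(n_voxels, np.nan)
    for j in range(n_voxels):
        keep = ~np.isnan(y[:, j])
        if keep.any():
            results[j] = estimator(y[keep, j], v[keep, j])
    return results

def ivw_mean(y, v):
    """Simple inverse-variance-weighted mean (fixed-effect estimate)."""
    w = 1.0 / v
    return np.sum(w * y) / np.sum(w)

est = fit_per_voxel(np.array([[1.0, np.nan],
                              [3.0, 4.0]]),
                    np.ones((2, 2)), ivw_mean)
# -> array([2.0, 4.0])
```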

tsalo commented 2 years ago

I think the problem with looping across voxels is that the estimation is vectorized, so it can work on many voxels at the same time. Switching to a loop would slow things down, unless we divided the voxels into groups based on their patterns of missing data and looped across those groups instead.
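The grouping idea could be sketched like this: find the distinct missingness patterns across voxels, then run the vectorized estimator once per group rather than once per voxel. The function name and shapes are hypothetical; this shows only the grouping step, not the estimation.

```python
import numpy as np

def group_by_missingness(y):
    """Yield (voxel_indices, study_keep_mask) for each distinct pattern
    of missing studies across the columns of `y`."""
    nan_pattern = np.isnan(y)  # shape (n_studies, n_voxels)
    patterns, inverse = np.unique(nan_pattern.T, axis=0, return_inverse=True)
    inverse = inverse.ravel()  # guard against shape differences across NumPy versions
    for g, pattern in enumerate(patterns):
        voxels = np.flatnonzero(inverse == g)
        yield voxels, ~pattern  # voxels sharing this pattern, studies to keep

y = np.array([[1.0, 2.0, 3.0],
              [4.0, np.nan, 6.0]])
groups = {tuple(v): tuple(keep) for v, keep in group_by_missingness(y)}
# Voxels 0 and 2 are complete; voxel 1 is missing the second study.
```

In the worst case every voxel has its own pattern and this degenerates to the per-voxel loop, but in practice missingness tends to be shared across large contiguous regions, so the number of groups is usually far smaller than the number of voxels.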