tyarkoni opened this issue 4 years ago
I just want to keep track of the things I've noticed that need to be changed/accounted for in this:
- In `pymare.core.Dataset`, `_get_predictors()` converts `X` to a DataFrame. DataFrames don't work with masked arrays; they automatically fill the missing values with NaNs.
- In `pymare.stats.ensure_2d()` (called by `Dataset` during initialization), data are converted to arrays, removing the masks from masked arrays.
- In `pymare.stats.weighted_least_squares()`, `np.einsum` seems to drop masking, although I can't be sure since einsum seems to be magic.
- With `np.dot(arr1, arr2)`, the resulting array will be a masked array with no mask.
- With `arr1.dot(arr2)`, masking only seems to be preserved if the first array is the one that is masked. It's weird.

Thanks, this list is helpful. For most if not all of the above, working with masked operations shouldn't be too hard. E.g., while einsum won't natively do any masking, I think we can just pass a masking array as one of the operands, and multiplying by the mask in the summation will then produce the desired result.
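A minimal sketch of that einsum-plus-mask idea (the shapes and names here are illustrative assumptions, not PyMARE's actual `weighted_least_squares` implementation):

```python
import numpy as np

# Illustrative shapes: studies along the first axis, parallel datasets
# (e.g., voxels) along the second. None of these names come from PyMARE.
n_studies, n_preds, n_voxels = 10, 2, 5
rng = np.random.default_rng(0)
X = rng.normal(size=(n_studies, n_preds))                # design matrix
y = rng.normal(size=(n_studies, n_voxels))               # estimates
w = rng.uniform(0.5, 2.0, size=(n_studies, n_voxels))    # weights
mask = (rng.uniform(size=(n_studies, n_voxels)) > 0.2).astype(float)  # 0 = missing

# Passing the mask as an extra einsum operand zeroes out missing studies
# in both X'WX and X'Wy, separately for each column (voxel).
xtwx = np.einsum('sp,sv,sv,sq->vpq', X, w, mask, X)  # (n_voxels, n_preds, n_preds)
xtwy = np.einsum('sp,sv,sv,sv->vp', X, w, mask, y)   # (n_voxels, n_preds)
beta = np.linalg.solve(xtwx, xtwy[..., None])[..., 0]  # per-voxel WLS estimates
```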
That said, if it does look like working with masked arrays is going to require major changes, we might have to bite the bullet and just return NaN for any voxels that have missing values. But hopefully it won't come to that.
I was wrong, it's not straightforward. Will leave open, but doubt I'll be able to work on it.
An alternative to using masked arrays would be to call the statistics code separately for each voxel, filtering the input matrices to remove missing data. I have a working example at https://github.com/HALFpipe/HALFpipe/blob/main/halfpipe/stats/fit.py.
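A rough sketch of that per-voxel approach, under assumed inputs (the function and variable names below are hypothetical and not taken from HALFpipe or PyMARE):

```python
import numpy as np

def fit_per_voxel(y, v, X):
    """Loop over voxels, dropping studies with missing data before each fit.

    y, v : (n_studies, n_voxels) effect sizes and sampling variances
    X    : (n_studies, n_preds) design matrix
    Returns per-voxel fixed-effects WLS estimates (illustrative only).
    """
    n_studies, n_voxels = y.shape
    betas = np.full((n_voxels, X.shape[1]), np.nan)
    for j in range(n_voxels):
        keep = ~np.isnan(y[:, j]) & ~np.isnan(v[:, j])
        if keep.sum() <= X.shape[1]:
            continue  # not enough studies to estimate this voxel
        Xj, yj, wj = X[keep], y[keep, j], 1.0 / v[keep, j]
        xtwx = Xj.T @ (wj[:, None] * Xj)
        xtwy = Xj.T @ (wj * yj)
        betas[j] = np.linalg.solve(xtwx, xtwy)
    return betas
```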
I think the problem with looping across voxels is that the estimation is currently vectorized, so it can work across many voxels at the same time. Switching to a per-voxel loop would slow things down, unless we divided the data into groups of voxels based on their patterns of missing data and looped across those groups instead.
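One possible way to implement that grouping, assuming missing data are encoded as NaNs in `y` (all names below are hypothetical):

```python
import numpy as np

def group_voxels_by_missingness(y):
    """Yield (study_mask, voxel_indices) pairs, one per unique missingness pattern.

    Voxels that share the same pattern of missing studies can be estimated
    together with the existing vectorized code after dropping those studies.
    """
    missing = np.isnan(y)  # (n_studies, n_voxels)
    patterns, inverse = np.unique(missing.T, axis=0, return_inverse=True)
    for i, pattern in enumerate(patterns):
        voxel_idx = np.flatnonzero(inverse == i)
        study_mask = ~pattern  # studies observed for this group of voxels
        yield study_mask, voxel_idx

# Usage idea: run the vectorized estimator once per group instead of once per voxel.
# for study_mask, voxel_idx in group_voxels_by_missingness(y):
#     fit_vectorized(y[np.ix_(study_mask, voxel_idx)], X[study_mask])
```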
Related to #9, we should add support for masked arrays wherever possible—this will allow vectorized estimation even when the studies in parallel datasets differ (i.e., users pass in NaN values in different studies for different datasets).
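For concreteness, a tiny illustration of how per-dataset missing studies could be represented as a masked array (purely illustrative, not PyMARE's current behavior):

```python
import numpy as np

# Studies are rows, parallel datasets (e.g., voxels) are columns.
# Different datasets are missing different studies.
y = np.array([[0.5, np.nan],
              [0.3, 0.4],
              [np.nan, 0.2]])

# Converting NaNs to a masked array lets masked-aware operations skip
# the missing entries while keeping a single vectorized array.
y_masked = np.ma.masked_invalid(y)
print(y_masked.mask)           # True where a study is missing for that dataset
print(y_masked.mean(axis=0))   # per-dataset means ignoring missing studies
```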