sanssouci-org / sanssouci.python

Post hoc inference via multiple testing
GNU General Public License v3.0

Save memory by storing only part of a learned template #35

Closed pneuvial closed 2 years ago

pneuvial commented 2 years ago

Currently the entire learned template is stored: this is an n_voxels x B_train matrix. With n_voxels = 50000 and B_train = 10000, that is 5 x 10^8 float64 values, i.e. a ~4 GB matrix! It is likely not useful to store all of the elements of this matrix, as only a subset of them will actually be active during calibration.
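For reference, a quick back-of-the-envelope check of the figure quoted above (assuming the template is stored as float64):

```python
n_voxels, B_train = 50_000, 10_000
bytes_per_value = 8  # assuming float64 storage
size_gb = n_voxels * B_train * bytes_per_value / 1e9
print(size_gb)  # 4.0 GB
```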

Suggestion: store at most 1000 values of lambda instead of B_train. In order to keep the current resolution in the downstream calibration, these values should be sampled unevenly from the original ones, i.e. focus on the quantiles of smaller order (which are expected to be the active ones at calibration).

Note that this will not change the time needed for computing the learned template.
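A minimal sketch of what the non-uniform subsampling could look like, assuming the learned template is a NumPy array of shape (n_voxels, B_train) whose second axis indexes quantile orders in increasing order (function and parameter names are illustrative, not part of the sanssouci API):

```python
import numpy as np

def subsample_template(template, n_kept=1000):
    """Keep only ~n_kept quantile curves out of B_train, sampled more
    densely for small quantile orders (the ones expected to be active
    at calibration).

    Assumes `template` has shape (n_voxels, B_train) with its second
    axis indexing quantile orders in increasing order.  Returns the
    reduced template and the kept quantile indices, which downstream
    calibration would need in order to map back to quantile orders.
    """
    n_voxels, B_train = template.shape
    if n_kept >= B_train:
        return template, np.arange(B_train)
    # Geometrically spaced indices: dense near order 1/B_train, sparse
    # near order 1.  np.unique may return slightly fewer than n_kept
    # indices because of rounding collisions.
    idx = np.unique(np.geomspace(1, B_train, num=n_kept).astype(int)) - 1
    return template[:, idx], idx
```

With B_train = 10000 and n_kept = 1000, this divides the memory footprint by roughly a factor of 10 (more if rounding collapses some indices), while keeping full resolution for the smallest quantile orders.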

alexblnn commented 2 years ago

Possible solution: add a "compressed" mode in which only a percentage of the curves is kept
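For illustration, such a mode could be a thin wrapper around the subsampling sketched above, exposing the kept percentage as a parameter (hypothetical names, not an existing option of either package):

```python
def learn_template_compressed(template, keep_fraction=0.1):
    """Hypothetical "compressed" mode: keep only a fraction of the
    quantile curves once the template has been learned.  Relies on the
    subsample_template sketch above."""
    n_voxels, B_train = template.shape
    n_kept = max(1, int(keep_fraction * B_train))
    return subsample_template(template, n_kept=n_kept)
```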

alexblnn commented 2 years ago

@pneuvial this is actually not relevant to sanssouci.python itself IMO; it rather belongs in posthoc-fmri (or its equivalent for genomics, etc.)