Currently the entire learned template is stored as an n_voxels x B_train matrix. With n_voxels = 50000 and B_train = 10000, this is a ~4 GB matrix (for 64-bit floats)! Storing all of its elements is likely wasteful, as only a subset of them will actually be active during calibration.
Suggestion: store a maximum of 1000 values for lambda instead of B_train. To keep the current resolution in the downstream calibration, these values should be unevenly sampled from the original ones: focus on the quantiles of smallest order, which are expected to be the active ones at calibration.
Note that this will not change the time needed for computing the learned template.
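The uneven sampling could be sketched roughly as follows. This assumes the template is a NumPy array with one row of sorted quantile values per voxel; geometric spacing of the column indices is one possible scheme that concentrates samples on the low-order quantiles (the variable names here are hypothetical, not the actual ones in the codebase):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, B_train, k = 500, 10_000, 1_000  # small n_voxels for illustration

# Hypothetical learned template: each row holds B_train sorted values.
template = np.sort(rng.standard_normal((n_voxels, B_train)), axis=1)

# Unevenly sample at most k column indices, denser near index 0
# (the small-order quantiles). Geometric spacing from 1 to B_train,
# shifted to 0-based indices; np.unique drops the duplicates that
# the rounding produces at the low end.
idx = np.unique(np.geomspace(1, B_train, num=k).astype(int) - 1)

# Compressed template: n_voxels x len(idx) instead of n_voxels x B_train.
compressed = template[:, idx]
```

Because of the duplicate removal, `len(idx)` ends up somewhat below `k`; the low-order quantiles are kept at full resolution while the high-order ones are thinned out. The compression is a pure post-processing step on the already-computed template, consistent with the note above that template computation time is unaffected.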