Precomputing training masks can be time-consuming, especially for Poisson-disc undersampling masks. There is ongoing discussion about performance bugs in Poisson-disc generation (see https://github.com/ad12/meddlr/issues/3#issue-1046033159), but in the meantime a hotfix is needed so that users do not hit bottlenecks when precomputing training masks.
This PR has two main findings and corresponding fixes:
1. Using the `sigpy` backend in `meddlr.data.transforms.subsample.PoissonDiscMaskFunc` considerably speeds up mask generation. However, this change only applies to the skm-tea dataset, not to other datasets supported by meddlr.
2. The mask generation process can be parallelized when precomputing training and validation masks.
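The second finding (parallel mask precomputation) can be sketched as below. This is a minimal illustration using Python's `multiprocessing`, not meddlr's actual implementation: `make_mask` is a hypothetical stand-in for the real Poisson-disc generator (which would call `sigpy.mri.poisson`), and the shapes and acceleration factor are placeholders. The key point is that each example gets its own mask, so generation is embarrassingly parallel.

```python
import multiprocessing as mp

import numpy as np


def make_mask(seed, shape=(320, 256), accel=6):
    """Hypothetical stand-in for a Poisson-disc mask generator.

    Uses a cheap Bernoulli mask with ~1/accel sampling rate so the
    sketch stays self-contained; the real backend would be
    sigpy.mri.poisson.
    """
    rng = np.random.default_rng(seed)
    return rng.random(shape) < (1.0 / accel)


def precompute_masks(num_masks, use_multiprocessing=True, processes=None):
    """Generate one mask per example, optionally in parallel."""
    seeds = list(range(num_masks))
    if use_multiprocessing:
        # Each mask is independent, so a simple Pool.map suffices.
        with mp.Pool(processes=processes) as pool:
            masks = pool.map(make_mask, seeds)
    else:
        masks = [make_mask(s) for s in seeds]
    return np.stack(masks)


if __name__ == "__main__":
    masks = precompute_masks(8, use_multiprocessing=True)
    print(masks.shape)  # (8, 320, 256)
```

Because mask generation is CPU-bound, process-based parallelism (rather than threads) is the appropriate choice here.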
## Changelist
- [x] Make `sigpy.mri.poisson` the backend for `PoissonDiscMaskFunc` for the skm-tea dataset
- [x] Add boolean config parameter `AUG_TRAIN.UNDERSAMPLE.PRECOMPUTE.USE_MULTIPROCESSING` to toggle whether multiprocessing is used to generate masks
## Usage
This change will most impact new users and users who have not previously precomputed training masks. By default, training masks are precomputed at the start of training if an existing cache file is not found.
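The cache-or-precompute behavior described above amounts to the following check. This is a hedged sketch only: the cache path, file name, and mask generation are illustrative placeholders, not meddlr's actual cache layout or API.

```python
import os

import numpy as np


def load_or_precompute_masks(cache_file, num_masks, shape=(320, 256), accel=6):
    """Load masks from the cache if present; otherwise generate and cache them.

    Hypothetical sketch: mask generation here is a cheap Bernoulli
    placeholder, and the cache file naming is illustrative.
    """
    if os.path.exists(cache_file):
        # Cache hit: skip the expensive precomputation entirely.
        return np.load(cache_file)
    rng = np.random.default_rng(0)
    masks = rng.random((num_masks, *shape)) < (1.0 / accel)
    os.makedirs(os.path.dirname(cache_file) or ".", exist_ok=True)
    np.save(cache_file, masks)
    return masks
```

This is why deleting the cached files (step 1 below) forces the masks to be regenerated with the new, faster path.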
Below is an example of how to speed up precomputing training masks using the config for the 6x E1 unrolled network (see MODEL_ZOO):
```bash
# 1. You may need to delete existing precomputed mask files.
#    Navigate to your meddlr cache directory and delete the precomputed-masks folder.
cd ~/.cache/meddlr/skm-tea
rm -rf precomputed-masks

# 2. Run training with multiprocessing on.
python tools/train_net.py \
    --config-file "download://https://drive.google.com/file/d/1k6-2J5pRP_C31fVPZNRabKInJyy968Sg/view?usp=sharing" \
    AUG_TRAIN.UNDERSAMPLE.PRECOMPUTE.USE_MULTIPROCESSING True
```