MASILab / Synb0-DISCO

Distortion correction of diffusion weighted MRI without reverse phase-encoding scans or field-maps
https://my.vanderbilt.edu/masi

High memory usage in inference. #58

Open toomanycats opened 6 months ago

toomanycats commented 6 months ago

I've found that when running the Singularity container version of the DISCO pipeline, we had to request 32 GB of memory from our Sun Grid Engine cluster for the job to complete.
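For context, a typical Grid Engine submission with that memory request might look like the following. This is a sketch only: the resource name (`h_vmem` vs. `mem_free`) and whether the limit is per-slot or per-job depend on the local cluster configuration, and `run_synb0.sh` is a hypothetical wrapper script, not part of the repo.

```shell
# Request 32 GB of virtual memory for the Synb0-DISCO job.
# h_vmem is the common hard memory limit resource in SGE, but
# the exact resource name varies by site configuration.
qsub -l h_vmem=32G run_synb0.sh
```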

I made a sandboxed version of the Singularity container and added a cache clear on a hunch. This appears to have worked, though I'm still double-checking.

```diff
 def inference(T1_path, b0_d_path, model, device):
+    torch.cuda.empty_cache()
     # Eval mode
     model.eval()
```
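A common cause of high memory use during PyTorch inference, separate from any CUDA cache, is running the forward pass without disabling autograd: `model.eval()` alone does not stop the graph and its intermediate activations from being kept. A minimal sketch of the difference, using a hypothetical stand-in model rather than the actual Synb0-DISCO network:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the pipeline's trained network.
model = nn.Linear(8, 8)
model.eval()  # eval() changes layer behavior (dropout, batchnorm) only

x = torch.randn(1, 8)

# Wrapping the forward pass in no_grad() prevents autograd from
# recording the graph, so intermediate activations are freed
# immediately instead of being held for a backward pass.
with torch.no_grad():
    out = model(x)

# No graph was recorded: the output is detached from autograd.
assert not out.requires_grad
```

If the pipeline's `inference` function already wraps its forward pass this way, the extra memory must be coming from elsewhere; if not, adding it is a cheap thing to try.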
toomanycats commented 6 months ago

UPDATE:

The cache clearing didn't help. The call is probably wrong anyway, since the device is not CUDA. I'm now attempting another idea: explicitly using the float16 dtype rather than what we think is the default, float32.
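For reference, a float16 tensor takes half the memory of a float32 one, and casting a model converts its parameters and buffers in place; inputs must be cast to match. A minimal sketch, again with a hypothetical stand-in model (whether the trained Synb0-DISCO weights remain accurate at half precision would need to be validated separately):

```python
import torch
import torch.nn as nn

# float32 uses 4 bytes per element; float16 uses 2.
t32 = torch.zeros(1024)          # default dtype is float32
t16 = t32.half()                 # same values, half the storage
assert t32.element_size() == 4 and t16.element_size() == 2

# model.half() casts all parameters and buffers to float16.
# Hypothetical stand-in for the pipeline's network:
model = nn.Linear(8, 8)
model.half()
assert all(p.dtype == torch.float16 for p in model.parameters())
# Inputs fed to the model must then also be .half() tensors.
```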