Closed JulienMaufront closed 1 year ago
Hi Julien,
I believe there is nothing wrong with that per se. The amount of memory used in the GPU will depend mainly on the size of subvolumes extracted and the batch size, as defined in the data extraction and training JSON files respectively.
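To make the scaling concrete: the raw input footprint of one training batch grows with the product of the subvolume dimensions and the batch size. The helper below is a hypothetical back-of-the-envelope sketch (not part of cryo-CARE) that estimates only the input tensor; in practice the network's intermediate activations dominate, so real usage is far larger:

```python
def estimate_batch_input_gb(patch_shape, batch_size, bytes_per_voxel=4):
    """Rough lower bound on GPU memory for one batch of float32 subvolumes.

    Hypothetical helper for illustration only; actual GPU usage is much
    higher because of the U-Net's intermediate activations.
    """
    voxels = 1
    for dim in patch_shape:
        voxels *= dim
    return voxels * batch_size * bytes_per_voxel / 1e9

# A (72, 72, 72) subvolume at batch size 16 is only ~0.024 GB of raw
# input, so most of the memory goes to the model, not the data itself.
print(estimate_batch_input_gb((72, 72, 72), 16))
```

This also shows why doubling `batch_size` or any patch dimension raises memory use predictably, while parameters that only change how many patches are extracted affect runtime, not the per-batch footprint.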
Best wishes, Ricardo
Hi Ricardo,
Thank you for this answer. Does that mean I could optimize the chosen values for the subvolume and batch sizes to take full advantage of the GPU's power?
Best regards, Julien
Hi Julien,
The most important parameters in this regard are `patch_shape` and `batch_size`, while `num_slices` influences how long the training takes but not how much memory is used. You can try optimizing these parameters, but even then training may never fully utilize your GPU memory. Perhaps a more efficient way of running cryo-CARE is to run several independent jobs on the same GPU simultaneously (similar to oversubscribing GPUs in RELION), but to be honest I have never tried that.
Personally, I would not worry about how much GPU memory it takes; I would rather optimize the training parameters based on 1) how good the results are and 2) how fast it runs.
Best wishes, Ricardo
Thank you, Ricardo, for all of these clues!
Best regards, Julien
Hi cryocare people,
I launched a training session with my dataset, and when I check the GPU usage it seems only 10 GB out of the 24 GB on my GPU is being used. Is this normal, or could I force more intensive usage? If something can be done, is there a parameter I can set when launching cryo-CARE, or could it be related to my setup?
Thank you for your help