Chuyun-Shen closed this issue 2 months ago.
This might be caused by an out of memory error. Can you try to run:
mrsegmentator --input "$data_path" --outdir "$output_path" --split_level 2
Works for me.
However, when I use `--split_level 2`, it seems to set the batch size to 1. I've checked my memory usage and GPU memory usage, and neither is maxed out. Is there a way to see specific error output? Also, do you know of any faster inference methods? It currently takes too long to process a single image when I run `mrsegmentator --input "$data_path" --outdir "$output_path" --split_level 2`.
Yes, `split_level` and `batch_size` are mutually exclusive. The purpose of `split_level` is to reduce memory usage, and it should only be used if a batch size of 1 is too large. If your memory is not maxed out, you could try setting the split level to 1, which requires approximately double the memory. Also, if you use Slurm, it helps to increase the number of workers, as nnUNet is heavily bottlenecked by CPU-based pre- and post-processing.
You can slightly reduce runtime by choosing a single fold instead of the ensemble classification by specifying `--fold 0`. On my system the runtime benefit is not that large, so I personally prefer the standard ensemble configuration to get that last bit of accuracy.
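For reference, a single-fold run would look like this (a sketch using the same placeholder paths as the commands above):

```shell
# Run inference with a single fold instead of the 5-fold ensemble
# (slightly faster, at the cost of a small amount of accuracy)
mrsegmentator --input "$data_path" --outdir "$output_path" --fold 0
```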
(If you are experienced with nnUNet, you could also play around with the `--nproc` and `--nproc_export` options. That said, the standard configuration worked best on my system.)
Thanks for your detailed response.
I have a directory path `data_path` with some NIfTI files in it, and I use `mrsegmentator --input "$data_path" --outdir "$output_path"`. After reading 8 images and predicting them, it raises an error as follows: And if I assign a fold model with `--fold 1`, it gets stuck after predicting, showing:
Done with image of shape torch.Size([1, 805, 233, 333]):