MIC-DKFZ / nnUNet

nnUNet inference speed up #2504

Open xieweiyi opened 3 days ago

xieweiyi commented 3 days ago

Hi,

I am running inference with nnUNet_compile=True nnUNetv2_predict and the options -npp 6 -nps 6, but only about half of my GPU memory is used. As far as I understood, this inference code runs a sliding window over overlapping tiles, which I assumed means it processes windows in batches. How can I increase the batch size so that inference runs faster and the GPU is fully occupied? I tried increasing -npp and -nps, but that seems to have no effect. Am I looking at the right options to tune?
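
For context, my rough mental model of the tiling is something like the sketch below (illustrative only, not nnUNet's actual code; tile_origins and the 50% overlap default are my assumptions):

```python
import itertools
import numpy as np

def tile_origins(image_shape, patch_size, step_size=0.5):
    # Per-axis start coordinates of overlapping sliding-window tiles.
    # step_size=0.5 means consecutive tiles advance by half a patch.
    per_axis = []
    for dim, patch in zip(image_shape, patch_size):
        if dim <= patch:
            per_axis.append([0])
            continue
        # enough tiles so neighbors advance by at most patch * step_size
        n_steps = int(np.ceil((dim - patch) / (patch * step_size))) + 1
        # spread them evenly so the last tile ends exactly at the border
        starts = np.linspace(0, dim - patch, num=n_steps).round().astype(int)
        per_axis.append(starts.tolist())
    # the cartesian product gives the corner of every tile
    return list(itertools.product(*per_axis))

# e.g. a (128, 256, 256) volume with a (64, 128, 128) patch -> 3*3*3 = 27 tiles
print(len(tile_origins((128, 256, 256), (64, 128, 128))))
```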

ykirchhoff commented 2 days ago

Hi @xieweiyi,

nnUNet does not use batched inference; it predicts one patch at a time. The potential speed-up from batching patches is rather small, and it is fairly complicated to implement for the sliding-window approach. There is an open pull request #2153 that processes the test-time augmentation (TTA) variants batch-wise, which makes much more sense, but the speed-up there is also rather small and I am not sure how up to date that pull request is. Note that -npp and -nps only set the number of CPU worker processes for preprocessing and segmentation export, which is why increasing them does not change GPU utilization.
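
To make the single-patch loop concrete, here is a simplified sketch of the general idea (not nnUNet's actual implementation; the Gaussian sigma scale and all names are illustrative). Each tile is predicted with batch size 1, and its logits are blended into the full volume with a Gaussian importance map so overlapping tiles agree at the seams:

```python
import torch

def gaussian_map(patch_size, device, sigma_scale=0.125):
    # separable Gaussian centered on the patch; center voxels weigh more
    axes = []
    for s in patch_size:
        x = torch.arange(s, dtype=torch.float32, device=device) - (s - 1) / 2
        axes.append(torch.exp(-0.5 * (x / (s * sigma_scale)) ** 2))
    g = axes[0][:, None, None] * axes[1][None, :, None] * axes[2][None, None, :]
    return g / g.max()

@torch.no_grad()
def sliding_window_predict(net, image, origins, patch_size, num_classes, device="cuda"):
    # image: (channels, D, H, W); origins: (z, y, x) tile corners as above
    _, d, h, w = image.shape
    logits = torch.zeros((num_classes, d, h, w), device=device)
    weights = torch.zeros((d, h, w), device=device)
    g = gaussian_map(patch_size, device)
    pz, py, px = patch_size
    for z, y, x in origins:
        patch = image[None, :, z:z+pz, y:y+py, x:x+px].to(device)
        pred = net(patch)[0]  # batch size 1 -> (num_classes, pz, py, px)
        logits[:, z:z+pz, y:y+py, x:x+px] += pred * g
        weights[z:z+pz, y:y+py, x:x+px] += g
    return logits / weights  # Gaussian-normalized class scores for the whole volume
```

Batching this loop would mean stacking several tiles into one forward pass and then scattering each prediction back to its own location in the accumulators, which complicates the bookkeeping for fairly little gain, since the GPU is already kept busy by the convolutions of each individual patch.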

Best, Yannick