Project-MONAI / research-contributions

Implementations of recent research prototypes/demonstrations using MONAI.
https://monai.io/
Apache License 2.0

Question about the input shape of UNETR #191

Open soribido opened 1 year ago

soribido commented 1 year ago

Dear Authors,

I have a question about the structure of UNETR. I am trying to segment 3D volume images ranging from 512x512x400 to 512x512x700. Since the embedding dimension is 768, can the input be variable in size (e.g., the differently sized images above)? Or is it necessary to first crop the volume into patches (e.g., 96x96x96), which the vision transformer then splits into its own patches?

tangy5 commented 1 year ago

Hi @soribido , the input image shape can vary, e.g., volumes can have different numbers of slices.

The size (96x96x96) is the patch size; that means during training/validation/inference, the model takes a sub-volume as input. You don't need to crop ahead of running training/validation/testing: the data transforms perform the crop when the dataloader loads the data.

https://github.com/Project-MONAI/research-contributions/blob/c148c436cf675e568649aa07763002cc0ab09ee8/UNETR/BTCV/utils/data_utils.py#L84
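To illustrate what that crop transform does conceptually (this is a plain-numpy sketch of the random spatial crop, not the actual MONAI transform linked above), the function name `rand_spatial_crop` and the volume shape below are illustrative assumptions:

```python
import numpy as np

def rand_spatial_crop(volume, roi_size=(96, 96, 96), rng=None):
    """Randomly crop a roi_size sub-volume from a larger 3D volume,
    mimicking what a random-crop data transform does at load time."""
    rng = rng or np.random.default_rng()
    # Pick a random valid start index along each spatial axis.
    starts = [int(rng.integers(0, dim - roi + 1))
              for dim, roi in zip(volume.shape, roi_size)]
    slices = tuple(slice(s, s + r) for s, r in zip(starts, roi_size))
    return volume[slices]

# A variably sized CT volume: the network only ever sees 96x96x96 crops.
vol = np.zeros((512, 512, 600), dtype=np.float32)
patch = rand_spatial_crop(vol)
print(patch.shape)  # (96, 96, 96)
```

Because the crop happens per sample inside the dataloader pipeline, volumes of different depths (400 to 700 slices) all yield fixed-size inputs for the transformer.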

If resources allow, you can use a larger patch size, but (96x96x96) is still recommended here; it's a moderate size for training on CT scans.
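At inference time the full volume is typically covered by overlapping patches whose predictions are stitched back together. The sketch below shows that idea in plain numpy (MONAI provides this as `sliding_window_inference`; the function here is a simplified illustration, and the overlap/averaging scheme is an assumption for the sketch):

```python
import numpy as np

def sliding_window_infer(volume, predict, roi=(96, 96, 96), overlap=0.5):
    """Tile a 3D volume into overlapping roi-sized windows, run `predict`
    on each window, and average overlapping predictions into a full-size
    output the same shape as the input."""
    step = [max(1, int(r * (1 - overlap))) for r in roi]
    out = np.zeros(volume.shape, dtype=np.float32)
    count = np.zeros(volume.shape, dtype=np.float32)

    # Window start positions per axis; force a final window flush with the edge.
    starts = []
    for dim, r, s in zip(volume.shape, roi, step):
        axis = list(range(0, max(dim - r, 0) + 1, s))
        if axis[-1] != dim - r:
            axis.append(dim - r)
        starts.append(axis)

    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                sl = (slice(x, x + roi[0]),
                      slice(y, y + roi[1]),
                      slice(z, z + roi[2]))
                out[sl] += predict(volume[sl])
                count[sl] += 1  # track how many windows covered each voxel
    return out / count
```

With an identity `predict`, the stitched output reproduces the input exactly, which is a quick sanity check that the overlap averaging is consistent.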