Open soribido opened 1 year ago
Dear Authors,

I have a question about the structure of UNETR. I am trying to segment 3D volumes ranging from 512x512x400 to 512x512x700. Since the embedding dimension is 768, can the model handle variable input sizes (e.g., the differently sized images above)? Or is it necessary to first crop the volumes into sub-volumes (e.g., 96x96x96), which are then split into patches for the vision-transformer structure?
Hi @soribido, the input image shape can vary, e.g., a different number of slices per volume. The size (96x96x96) is the patch size, meaning that during training, validation, and inference the model takes a sub-volume as input. You do not need to crop the volumes ahead of running training/validation/testing; the data transforms perform the crop when the data is loaded from the dataloader.
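A minimal sketch (not the authors' exact pipeline) of how this looks with MONAI's dictionary transforms: random 96x96x96 sub-volumes are cropped at load time, so the full 512x512x400-700 volume never has to fit the network input. The "image"/"label" keys and the 14 output classes (as in the BTCV example) are assumptions here.

```python
from monai.transforms import (
    Compose,
    LoadImaged,
    EnsureChannelFirstd,
    RandCropByPosNegLabeld,
)
from monai.networks.nets import UNETR

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # Crops random 96x96x96 sub-volumes when each sample is loaded,
    # balanced between foreground (pos) and background (neg) centers.
    RandCropByPosNegLabeld(
        keys=["image", "label"],
        label_key="label",
        spatial_size=(96, 96, 96),
        pos=1,
        neg=1,
        num_samples=4,  # assumed value; tune to your GPU memory
    ),
])

# img_size fixes the ViT patch grid for each sub-volume;
# hidden_size=768 is the embedding dimension asked about and is
# independent of the original volume size.
model = UNETR(
    in_channels=1,
    out_channels=14,  # assumed, matching the BTCV tutorial
    img_size=(96, 96, 96),
    feature_size=16,
    hidden_size=768,
    mlp_dim=3072,
    num_heads=12,
)
```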
If resources allow, you can use a larger patch size, but (96x96x96) is still recommended here; it is a moderate size for training on CT scans.
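At inference time, a full variable-size volume can be processed by sliding the same window over it and stitching the overlapping predictions. A minimal sketch with MONAI's `sliding_window_inference`; the overlap value and the input shape are assumptions:

```python
import torch
from monai.inferers import sliding_window_inference

model.eval()
with torch.no_grad():
    # `val_image` is a (batch, channel, H, W, D) tensor of any spatial
    # size, e.g. (1, 1, 512, 512, 700); predictions from overlapping
    # 96x96x96 windows are blended into one full-volume output.
    logits = sliding_window_inference(
        inputs=val_image,
        roi_size=(96, 96, 96),
        sw_batch_size=4,
        predictor=model,
        overlap=0.5,
    )
```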