Closed: jumutc closed this issue 3 years ago
Hey @jumutc,
MIScnn does not automatically resize images that are smaller than the desired patch shape.
Problem:
Side note:
If you run patchwise-crop training, no exception may be thrown, because the random cropping in batchgenerators automatically pads an image that is smaller than the patch size.
https://github.com/MIC-DKFZ/batchgenerators/blob/master/batchgenerators/augmentations/crop_and_pad_augmentations.py
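To illustrate why patchwise-crop training hides the problem, here is a minimal sketch of the pad-then-crop behavior described above. This is a hypothetical re-implementation for illustration, not batchgenerators' actual code; the function name `random_crop_with_pad` is made up.

```python
import numpy as np

def random_crop_with_pad(img, patch_size, rng=None):
    """Crop a random patch; zero-pad first if the image is smaller.

    Hypothetical sketch of the behavior described for batchgenerators'
    random crop -- not the library's actual implementation.
    """
    rng = rng or np.random.default_rng(0)
    # Pad each axis up to the patch size if needed (zero padding).
    pad = [(0, max(0, p - s)) for s, p in zip(img.shape, patch_size)]
    img = np.pad(img, pad, mode="constant")
    # Every axis is now >= patch size, so a crop always succeeds.
    start = [rng.integers(0, s - p + 1) for s, p in zip(img.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(start, patch_size))
    return img[slices]

small = np.ones((200, 180))          # smaller than the patch on both axes
patch = random_crop_with_pad(small, (320, 320))
print(patch.shape)                   # (320, 320) -- no exception thrown
```

Because the crop silently pads undersized images, a patch-shape mismatch only surfaces in the fullimage/prediction path.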
Debugging:
Please print your image sizes (after preprocessing) and verify that they are all greater than or equal to your patch size.
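A quick way to run this check is to compare every spatial axis against the patch shape. In this sketch, `loaded_images` stands in for your preprocessed samples as numpy arrays; the variable names are illustrative, not MIScnn API.

```python
import numpy as np

# `loaded_images` is a placeholder for your preprocessed sample arrays.
patch_shape = (320, 320)
loaded_images = [np.zeros((1024, 1400)), np.zeros((300, 1200))]  # dummy data

# Collect indices of samples with any spatial axis below the patch size.
undersized = [i for i, img in enumerate(loaded_images)
              if any(s < p for s, p in zip(img.shape, patch_shape))]
print(undersized)  # → [1]: the 300-pixel axis is below 320
```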
Fast Solution:
Cheers, Dominik
@muellerdo all images are much larger than 320x320; every side is >1k pixels. Could the differing sizes of the images cause a problem in batchgenerators? I also noticed that in the `batches` folder everything is generated for prediction use, so most probably something is wrong in `data_generator.py`.
I tried to debug this case and the problem indeed is in that class:
```python
from miscnn.neural_network.data_generator import DataGenerator

dataGen = DataGenerator(sample_list[:1], model.preprocessor,
                        training=False, validation=False,
                        shuffle=False, iterations=None)
print(len(dataGen))
print(dataGen[0])
```
with output being:
```
0
[[[[-2.0725093 -2.0371544 -2.0371544 ]
  [-2.0017996 -1.9310896 -2.0017996 ]
  [-2.1432192 -2.0017996 -2.3907037 ]
  ...
  [-0.7643772 -0.870442  -1.1532813 ]
  [-0.9765067 -0.870442  -1.1179265 ]
  [-0.9411518 -1.0118617 -1.0825715 ]]]]
```
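One plausible explanation for `len(dataGen)` being `0` with a single sample is a floor division of the sample count by the batch size inside `__len__`. The sketch below is purely illustrative of that failure mode, not MIScnn's actual `DataGenerator` code.

```python
import math

# Hypothetical: a Keras-style Sequence reports length 0 when __len__
# floors the sample count by the batch size (1 sample, batch_size 2).
def num_batches_floor(n_samples, batch_size):
    return n_samples // batch_size            # 1 // 2 == 0

# Using ceil instead keeps the final partial batch.
def num_batches_ceil(n_samples, batch_size):
    return math.ceil(n_samples / batch_size)  # always >= 1 for n >= 1

print(num_batches_floor(1, 2))  # 0 -> the generator looks empty
print(num_batches_ceil(1, 2))   # 1 -> the single partial batch survives
```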
The pipeline looks like this:
The original images are of course much bigger, but the error occurs when I call `model.predict(sample_list[0:4])`. I have checked both analysis regimes and the `ValueError` appears in both.