Open jumutc opened 4 years ago
Bug randomly reproduced in version 1.0.3 as well.
Hey @jumutc,
sorry for the late reply.
Can you elaborate and describe the bug a little bit more?
It is interesting that, when executing this predict a second time, everything works!
This is quite odd.
Are you using a patchwise strategy with an overlap between patches? Something like pp.patchwise_overlap = (80, 80, 40).
And if you are using fullimage, I assume you are analysing 2D images, right? Did you ensure, e.g. via resizing, that all images have the same shape?
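Just for reference, a rough sketch of the two setups I mean (assuming data_io and sf are defined as in your pipeline; the patch shape here is only an example value):

from miscnn import Preprocessor

# Patchwise strategy: images are sliced into patches of patch_shape,
# and patchwise_overlap controls the overlap between neighbouring patches
pp_patch = Preprocessor(data_io, batch_size=5, subfunctions=sf,
                        analysis="patchwise-grid", patch_shape=(160, 160, 80))
pp_patch.patchwise_overlap = (80, 80, 40)

# Fullimage strategy: each image is processed as a whole, so all images
# must share the same shape (e.g. enforced via a Resize subfunction)
pp_full = Preprocessor(data_io, batch_size=5, subfunctions=sf,
                       analysis="fullimage")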
Cheers, Dominik
Hi @muellerdo,
The dataset consists of 2D images and the pipeline looks like this:
# Imports for this snippet (module paths as in the MIScnn examples;
# data_io, scheduler and sample_list are defined elsewhere and omitted here)
from miscnn import Data_Augmentation, Preprocessor, Neural_Network
from miscnn.processing.subfunctions import Normalization, Resize
from miscnn.neural_network.metrics import (tversky_crossentropy, tversky_loss,
                                           dice_soft, dice_crossentropy)
from miscnn.neural_network.architecture.unet.standard import Architecture
from tensorflow.keras.callbacks import LearningRateScheduler, EarlyStopping, TensorBoard

# Create a pixel value normalization Subfunction for z-score scaling
sf_zscore = Normalization(mode="z-score")
# Create a resizing Subfunction to shape 592x592
sf_resize = Resize((592, 592))
# sf_resize = Resize((768, 768))
# Assemble Subfunction classes into a list
sf = [sf_resize, sf_zscore]
# Configure data augmentation part
aug = Data_Augmentation(cycles=3, scaling=False, rotations=False,
                        elastic_deform=False, mirror=False, brightness=True,
                        contrast=True, gamma=True, gaussian_noise=True)
aug.config_p_per_sample = 0.35
aug.config_contrast_range = (0.1, 5)
aug.config_brightness_range = (0.1, 5)
aug.config_gaussian_noise_range = (0.01, 0.5)
# Initialize Preprocessor
pp = Preprocessor(data_io, batch_size=5, subfunctions=sf,
                  prepare_subfunctions=True, prepare_batches=False,
                  data_aug=aug, analysis="fullimage")
# Create the Neural Network model
model = Neural_Network(preprocessor=pp, loss=tversky_crossentropy,
                       metrics=[tversky_loss, dice_soft, dice_crossentropy],
                       batch_queue_size=10, workers=5, learninig_rate=2e-4,
                       architecture=Architecture())
...
cb_lr = LearningRateScheduler(scheduler)
cb_es = EarlyStopping(monitor='loss', mode='min', min_delta=0.0001, patience=10)
cb_tb = TensorBoard(log_dir="tensorboard", histogram_freq=0, write_graph=True, write_images=True)
model.train(sample_list[2:], epochs=100, iterations=500, callbacks=[cb_lr, cb_es, cb_tb])
model.predict(sample_list[0:2])
and the problem occurs at the last stage, on predict.
Mhm. Normally, the number of predicted batches for fullimage should be 1, because MIScnn runs inference on just one sample at a time and there are obviously no patches.
Puh, can you try reducing the batch size to 1 via something like this:
# Training model like usual
...
model.train(...)
# Define a new Preprocessor class for prediction -> just for debugging
model.preprocessor = Preprocessor(data_io, batch_size=1, subfunctions=sf,
                                  prepare_subfunctions=True, prepare_batches=False,
                                  data_aug=aug, analysis="fullimage")
# Run inference
model.predict(sample_list[0:2])
And then see if it's working?
Hi @muellerdo,
With batch_size 1 and 3 everything works, and sometimes also with 2. This is very strange and makes the bug hard to reproduce. I cannot add anything more to it!
In the fullimage mode I get this on predicting a batch: [error output not preserved]. It is interesting that executing this predict the second time everything works!
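If it helps, here is a small, purely illustrative sketch for narrowing this down: it reuses the Preprocessor swap suggested above and simply retries the prediction with several batch sizes (all names and arguments mirror the snippets earlier in this thread):

# Illustrative only: rerun prediction with different batch sizes to see
# which ones fail, reusing the same Preprocessor arguments as above
for bs in (1, 2, 3, 5):
    model.preprocessor = Preprocessor(data_io, batch_size=bs, subfunctions=sf,
                                      prepare_subfunctions=True, prepare_batches=False,
                                      data_aug=aug, analysis="fullimage")
    try:
        model.predict(sample_list[0:2])
        print("batch_size =", bs, "-> OK")
    except Exception as err:
        print("batch_size =", bs, "-> failed:", repr(err))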