femonk opened this issue 6 years ago
Sorry for the late reply. Maybe you have found a solution in the meantime.
Anyway, I'm a bit unsure about
weighted_loss = tf.multiply(loss_map, class_weights[:, 0] + class_weights[:, 1])
Also, the image_util.ImageDataProvider shuffles randomly through the training files. Do you account for that when computing the weight maps?
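If the provider does shuffle, the weight maps have to stay paired with their images. A minimal sketch of one way to keep that pairing, assuming the maps are stored as .npy files named after the images (weightmap_for and the naming scheme are illustrative, not tf_unet API):

```python
import os
import numpy as np

# Hypothetical helper (not part of tf_unet): look up the weight map that
# belongs to a training image by filename, so the pairing survives shuffling.
def weightmap_for(image_path, weight_dir="weightmaps"):
    # e.g. "data/train/img_07.tif" -> "weightmaps/img_07_weight.npy"
    base = os.path.splitext(os.path.basename(image_path))[0]
    return np.load(os.path.join(weight_dir, base + "_weight.npy"))
```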
Hi,
thank you for your reply @jakeret. What do you mean by 'Anyway, I'm a bit unsure about weighted_loss = [...]'?
As far as I can see, the image_util.ImageDataProvider isn't shuffling randomly through the training files, so this shouldn't be the problem, right?
Hi,
I tried to improve the segmentation results of my UNet using individual 'weightmaps' (+) during training. These weight maps should increase the probabilities of certain pixels in the segmentation, so that narrow edges are detected / segmented more reliably. But all my attempts to integrate these weight maps into the UNet code have been unsuccessful so far. Does anyone have some suggestions?
(+) weightmaps: I call them 'weightmaps', but maybe something like 'probability-map prefactor matrix' would describe the maps' intention better.
To give some details: I have some ground-truth segmentations with really narrow areas, like the 'residue' shown here in the uppermost corner:
The idea is to 'smooth' the borders of the segmented area using a Gaussian filter, so that the probabilities of the pixels next to the narrow area are slightly increased, which hopefully improves the correct segmentation of these small areas. The calculation of the weight maps is defined as a function in 'training.py':
def own_weightmap(input_img): [...]
and the weight map for the example shown above looks like this:

The weight maps are calculated in 'training.py' using this code:
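A minimal sketch of such a Gaussian-smoothing weight map, assuming scipy.ndimage and a binary ground-truth mask as input (sigma and amplification are illustrative parameters, not values from the original code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def own_weightmap(input_img, sigma=3.0, amplification=5.0):
    # input_img: binary ground-truth mask (1 = foreground, 0 = background).
    # Blurring the mask gives pixels near (narrow) foreground regions a
    # weight > 1, while background far away keeps a neutral weight of 1.
    blurred = gaussian_filter(input_img.astype(np.float32), sigma=sigma)
    return 1.0 + amplification * blurred
```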
These weight maps (called 'weights' in the code snippet) should be fed into the network right before the assignment to the respective classes is done. In the UNet architecture of Ronneberger et al. (https://arxiv.org/abs/1505.04597) this would be 'at the last turquoise arrow', right before the '1x1 conv' and the 'output segmentation map':
Currently, the calculated weight maps are fed into the network (also in 'training.py') by:
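A minimal sketch of this feeding step, assuming TensorFlow 1.x (which tf_unet targets); the placeholder name weight_map and the variable batch_w are assumptions for illustration:

```python
import tensorflow as tf  # tf_unet targets TensorFlow 1.x

# In the network definition (unet.py), next to the existing x / y placeholders:
weight_map = tf.placeholder(tf.float32, shape=[None, None, None],
                            name="weight_map")  # one weight per pixel

# In the training step (training.py), the current batch's maps (batch_w,
# e.g. produced by own_weightmap) are fed together with images and labels:
#   sess.run(optimizer, feed_dict={net.x: batch_x,
#                                  net.y: batch_y,
#                                  net.weight_map: batch_w,
#                                  net.keep_prob: dropout})
```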
To improve the segmentation, the predictions are multiplied pixelwise with the respective element of the weight map. This is done in 'unet.py' in lines 220-232 ('if class_weights is not None ...'):
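The modification boils down to scaling the per-pixel cross-entropy loss_map by the custom map before averaging. A sketch of that branch under the assumptions above (flat_weights is hypothetical; flat_logits and flat_labels already exist in tf_unet's _get_cost):

```python
# Inside unet.py, around the "if class_weights is not None" branch:
flat_weights = tf.reshape(self.weight_map, [-1])        # one weight per pixel
loss_map = tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
                                                   labels=flat_labels)
weighted_loss = tf.multiply(loss_map, flat_weights)     # pixelwise weighting
loss = tf.reduce_mean(weighted_loss)
```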
Lines 224 & 225 ('#weight_map = ...') are commented out, since otherwise the error 'loss = not a number' occurs.
But somehow this approach doesn't work. As a result, the network is trained towards irrelevant areas, as the probability map shows:
ground truth input:
probability map (yellow: high values, blue: low values): this should normally bear some similarity to the calculated weight map, since it is the 'loss_map' multiplied with the weight map (see code above). But as you can see, it doesn't look as expected.
And as a result, all or most pixels are wrongly assigned to the class 'background' (= black), so that the prediction images are entirely black, or show only a small area assigned to the right class:
resulting segmentation:
Does anybody have any suggestions as to where or what the problem in my code is? Thanks!