tanguyduval opened this issue 7 years ago
Blurry images can severely degrade CNN performance; there is a paper evaluating this effect [1] (in its figure, the x-axis is the σ of the blur kernel).
It would be nice to test blur augmentation to see if it improves robustness. Do we have similarly blurred patches in the training/test sets?
Another approach would be to plug a super-resolution network (e.g. [2]) into a parallel branch of the original network to perform deblurring.
[1] https://arxiv.org/pdf/1604.04004.pdf
[2] https://www.doc.ic.ac.uk/~oo2113/publications/MICCAI2016_camera_ready.pdf
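To experiment with this, blur augmentation could be as simple as the following sketch (a hypothetical helper, not the repo's actual transform; it assumes NumPy/SciPy and single-channel patches):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_blur(patch, sigma_max=2.0, p=0.5, rng=None):
    """With probability p, blur `patch` with a Gaussian kernel whose
    sigma is drawn uniformly from [0, sigma_max]; otherwise return it
    unchanged. sigma_max is a placeholder to tune against [1]."""
    rng = rng or np.random.default_rng()
    if rng.random() >= p:
        return patch
    sigma = rng.uniform(0.0, sigma_max)
    return gaussian_filter(patch, sigma=sigma)
```

Capping `sigma_max` would keep the augmented patches within the blur range the paper suggests networks can still tolerate.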
In the folder /Volumes/data_processing/tanguy/Histo/Human, slices T4 and C2A show bad focus.
To be noted: blurring has been implemented as a data augmentation function and is part of the transformations we apply during training.
However, I am not sure of its exact effect on training, other than the expectation that it should improve robustness to blur.
I would be very cautious with blur augmentation because, as far as I recall, we haven't done any validation of it. Since CNNs are VERY sensitive to blurring (see the paper above), it may be detrimental to the final prediction on images that aren't blurred.
Some regions suffer from bad focus, which biases the results: far fewer small axons are detected there. It would be nice to detect this automatically.
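A simple starting point for flagging such regions (a sketch, not validated on this data; the threshold is a placeholder that would need tuning on known in-focus/out-of-focus slices) is the variance of the Laplacian, which drops sharply on defocused patches:

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(gray_patch):
    """Variance of the Laplacian: sharp patches have strong edges and
    hence a high score; defocused patches score low."""
    return laplace(gray_patch.astype(np.float64)).var()

def is_blurry(gray_patch, threshold=1e-3):
    """Flag a patch as out of focus. The threshold is a placeholder;
    it should be calibrated on slices like T4/C2A vs. in-focus ones."""
    return focus_score(gray_patch) < threshold
```

Running this over a sliding window would produce a per-region focus map, which could also be used to mask out unreliable regions from the axon counts.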
Bad focus: [image omitted]
In focus: [image omitted]