Closed: benmuhlmann closed this issue 4 years ago
Hi Ben,
after having a look at your data, multiple things need to be pointed out.
Fig. 1: left, image 251; middle, a zoom into a region of image 251. The data is not subject to much pixel noise and looks rather nice! Right: the second image you refer to. It is very different, actually not showing much at all, and certainly no structures similar to the ones seen in image 251. The histograms show how the pixel intensities in image 251 (left) and in the second image (right) are distributed. Note the massive difference!
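The histogram comparison above can be reproduced with a short NumPy snippet. This is a sketch with simulated stand-in arrays (the two images and all values are hypothetical; substitute your own loaded image data):

```python
import numpy as np

def intensity_histogram(img, bins=64):
    """Return a normalized histogram of pixel intensities in [0, 1]."""
    counts, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    return counts / counts.sum(), edges

# Simulated stand-ins: a structured image with shot-like noise vs. a
# near-flat, out-of-focus image (replace with your own data).
rng = np.random.default_rng(0)
structured = np.clip(rng.poisson(lam=50, size=(128, 128)) / 100.0, 0, 1)
flat = np.clip(rng.normal(0.2, 0.01, size=(128, 128)), 0, 1)

h1, _ = intensity_histogram(structured)
h2, _ = intensity_histogram(flat)

# A large L1 distance between the two histograms flags very different
# intensity distributions, as seen between image 251 and the second image.
l1 = np.abs(h1 - h2).sum()
print(f"L1 histogram distance: {l1:.3f}")
```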
Anyway, here are my main points for you:
I hope this helps a bit and maybe motivates you to have another look at our papers. I'm sure N2V could be helpful for you, but you will need to train on images that are subject to pixel-independent noise (e.g. shot noise) and then apply the model either to the same image or to images that are similar in nature, not just to other images coming from the same microscope.
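The pixel-independence requirement can be illustrated with synthetic data (a sketch with made-up signal and noise parameters, not your microscope data): shot noise draws each pixel independently, whereas structured noise correlates neighboring pixels and violates N2V's assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Clean synthetic signal (a smooth gradient standing in for real structure).
clean = np.linspace(10, 200, 256 * 256).reshape(256, 256)

# Shot noise: each pixel is drawn independently from a Poisson distribution
# whose rate is the clean signal -- this satisfies N2V's assumption.
shot_noisy = rng.poisson(clean).astype(float)

# Structured noise: one perturbation shared by a whole row -- neighboring
# pixels now carry correlated noise, violating the independence assumption.
row_noise = rng.normal(0, 10, size=(256, 1))
structured_noisy = clean + row_noise  # broadcasts across columns

def neighbor_noise_corr(noisy, clean):
    """Correlation between each pixel's noise and its right neighbor's."""
    noise = noisy - clean
    return np.corrcoef(noise[:, :-1].ravel(), noise[:, 1:].ravel())[0, 1]

print(neighbor_noise_corr(shot_noisy, clean))        # near 0: independent
print(neighbor_noise_corr(structured_noisy, clean))  # near 1: correlated
```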
Best, Florian
Hi all, I'm running into some strange results when making predictions on images that were not used in training. Both images were produced with the same microscope setup, so my assumption is that the noise should be similar.
I've included the Python files used to train and predict, based on the 2D RGB examples given in the n2v repo.
Image 251 was cropped and then used to train the U-Net. This model was then used to make predictions for image 251 and for another 'very noisy' image which is mostly out of focus.
The prediction on image 251 seems to reduce some noise. The prediction on the 'very noisy' image seems to change the image's entire color map. This is the main issue.
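One possible contributor to the color-map change (an assumption, not confirmed from the attached files): N2V normalizes inputs with statistics derived from the training data, so an image whose intensity distribution differs drastically from image 251's gets mapped into an unfamiliar range, which a fixed display mapping then renders with shifted colors. A minimal sketch of that effect, with hypothetical statistics:

```python
import numpy as np

# Hypothetical training-image statistics standing in for image 251.
train_mean, train_std = 120.0, 30.0

def normalize(img, mean, std):
    """Standardize an image with fixed (training-set) statistics."""
    return (img - mean) / std

rng = np.random.default_rng(1)
similar = rng.normal(120, 30, size=(64, 64))   # resembles the training image
different = rng.normal(20, 2, size=(64, 64))   # like the 'very noisy' image

z_similar = normalize(similar, train_mean, train_std)
z_different = normalize(different, train_mean, train_std)

# The similar image lands roughly in [-3, 3]; the different one is pushed
# far into the negative tail, so a fixed display mapping shifts its colors.
print(z_similar.mean(), z_different.mean())
```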
https://drive.google.com/drive/folders/1h2C4qWxS7g0NeaiM6-Guy69zEbue-qhr?usp=sharing