yu4u / noise2noise

An unofficial and partial Keras implementation of "Noise2Noise: Learning Image Restoration without Clean Data"

Correct my understanding if I'm wrong, but why do I feel that this is very low-quality research? #28

Open ackbar03 opened 5 years ago

ackbar03 commented 5 years ago

Hi,

To be fair, it's not really an issue with your code, but why do I feel this is very low-quality research? They are essentially saying that if you don't have the clean photo for training but have different noisy versions BASED ON the clean photo, you can get similar results. Is that not common sense? To me, first of all:

1) They basically discovered the power of averaging, big whoop (see the sketch below).
2) In how many real-world scenarios is this actually useful? When will you have a number of different noisy photos that happen to be based on the same CLEAN photo, just with different distributions of noise overlaid on top of it?
3) If you take a step back and use common sense, the noisy images must be generated from the SAME clean photo. So of course the information in the clean photo is already in there! You've just artificially degraded the information on purpose; what is the point of that in the first place?
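To spell out point 1 with a quick numpy sketch (made-up numbers, nothing from this repo): with zero-mean noise, averaging enough noisy copies of the same signal recovers it, and that average is exactly what an L2 loss against noisy targets converges to.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=1000)          # made-up "clean" signal

# 100 independent zero-mean noisy observations of the same clean signal
noisy = clean + rng.normal(0.0, 0.1, size=(100, 1000))

# The per-pixel average converges to the clean signal as the number of
# observations grows; it is also the minimizer of the L2 loss against the
# noisy targets, which is why L2 training behaves the same way.
print(np.abs(noisy.mean(axis=0) - clean).mean())  # small; shrinks like sigma/sqrt(N)
```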

As for the claim that the intention is to train on images specific to the noise of the device: fine, that is definitely doable, but exactly how much value does that add? And I even doubt it's practical, because you need to take multiple images of exactly the same "ground truth" for it to work; you must have a seriously crappy device if that is your intention.

I know this might not be the right place to share my comments, but I was seriously disappointed in this piece of research, and I don't know if anyone else feels the same or if I actually missed something important.

yu4u commented 5 years ago

I think Reddit is the right place to post such a discussion:

https://www.reddit.com/r/MachineLearning/comments/8xsk0p/r_noise2noise_learning_image_restoration_without/

ackbar03 commented 5 years ago

It's been archived; I can't comment on it anymore.

Again, I could be wrong, but I think this is a bit concerning. I've noticed quite a few subpar research pieces in deep learning now; somebody really should start policing them a bit more. I think the general public also needs to think a bit more critically about publications.

dylan-plummer commented 5 years ago

I actually have a particularly relevant problem that I am attempting to apply this method to. Without revealing too many details: I am working with biological data that we are attempting to denoise. Performing the experiments to collect the data produces very noisy results, but we can repeat the experiment to obtain a biological replicate of the true signal with different noise. This method seems promising for using these replicates to obtain a biologically useful denoised result.

victorca25 commented 5 years ago

Besides @dylan-plummer's awesome use case, you should watch the video explanation for the motivation of this research. In this repo's case, noise is generated artificially due to the lack of real images to form a dataset, but you do not need the clean image for this to work: the method is used with loads of low-exposure photographs of the same object (e.g. space photography), and the underlying explanation of why it works is that, on average, the pixels are captured correctly rather than the noise. Put simply, in some cases it's cheaper (and in medical situations, less risky for the subject) to take multiple noisy images than one single noise-free image. The research is good if you understand it.
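For anyone who wants to see the mechanics, here is a minimal sketch of the idea, not this repo's actual training code; the toy model, noise level, and data shapes are all made up. The only things that matter are that input and target are two independent noisy draws and that the loss is L2:

```python
import numpy as np
from tensorflow.keras.layers import Conv2D, Input
from tensorflow.keras.models import Model

def add_noise(x, sigma=25.0):
    # One independent zero-mean Gaussian noise realization per call
    return np.clip(x + np.random.normal(0.0, sigma, x.shape), 0.0, 255.0)

# The clean batch is only used here to synthesize the pair; with real data,
# two captures of the same scene would replace it.
clean = np.random.uniform(0.0, 255.0, (8, 64, 64, 3)).astype(np.float32)
x_noisy = add_noise(clean)  # network input
y_noisy = add_noise(clean)  # target: another noisy draw, never the clean image

inputs = Input(shape=(64, 64, 3))
h = Conv2D(32, 3, padding="same", activation="relu")(inputs)
outputs = Conv2D(3, 3, padding="same")(h)
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")  # L2: the minimizer is the mean over noise
model.fit(x_noisy, y_noisy, epochs=1, batch_size=4)
```

Because the noise on the target is zero-mean and independent of the input, the L2-optimal prediction is the expected target, i.e. the clean image, so the network converges to a denoiser without ever seeing clean data.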

QiangZhangCV commented 1 year ago

> I actually have a particularly relevant problem that I am attempting to apply this method to. Without revealing too many details: I am working with biological data that we are attempting to denoise. Performing the experiments to collect the data produces very noisy results, but we can repeat the experiment to obtain a biological replicate of the true signal with different noise. This method seems promising for using these replicates to obtain a biologically useful denoised result.

Thanks for sharing your experience with biological data. During training, the two noisy images acting as input and target are obtained by adding independent noise to a clean image. However, in many application scenarios, especially medical imaging, clean images can hardly be obtained. Hence, I wonder how to obtain two noisy samples of the same clean image under this condition. Could you share more information on this point?
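To make the question concrete, here is roughly the pairing meant in both settings, as a toy sketch with a made-up noise model (not this repo's exact code):

```python
import numpy as np

def corrupt(img, sigma=25.0):
    # One independent zero-mean Gaussian noise realization per call
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 255.0)

# Synthetic setting (what the benchmarks here do): both halves of the
# training pair come from independently corrupting the same clean patch.
clean_patch = np.random.uniform(0.0, 255.0, (64, 64, 3))
pair_synthetic = (corrupt(clean_patch), corrupt(clean_patch))

# No-clean-image setting (medical/biological data): the true signal is never
# observed; two independent captures of the same scene play the roles of
# input and target instead, as with the replicates described above.
true_signal = np.random.uniform(0.0, 255.0, (64, 64, 3))  # stands in for the unknown scene
replicate_a = corrupt(true_signal)   # first acquisition
replicate_b = corrupt(true_signal)   # second acquisition / biological replicate
pair_replicates = (replicate_a, replicate_b)
```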