Open · TouqeerAhmad opened this issue 6 years ago
Hi Touqeer,
Developing a generalized MLP is a good research direction, but our paper mainly focused on introducing a new noise model.
We do not have experimental results for this. However, the noise level differs across camera models and their specific settings (especially ISO), so the performance would likely decrease.
You are right. We trained a network for each camera/setting combination to achieve the best performance, but I agree that training one generalized network would be desirable in practice.
Yes, the network would need to be deeper and more complex if we wanted to train a generalized one.
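Just to illustrate the kind of difference I mean, here is a toy NumPy sketch contrasting a small per-camera model with a deeper conditioned one. The layer sizes, the 6-value covariance output, and the idea of appending ISO/camera features to the input are only my illustration here, not what the paper specifies:

```python
import numpy as np

def mlp_forward(x, layers):
    """Plain fully connected forward pass, ReLU on hidden layers."""
    *hidden, last = layers
    for W, b in hidden:
        x = np.maximum(0.0, x @ W + b)
    W, b = last
    return x @ W + b

def make_layers(sizes, rng):
    """Randomly initialized (W, b) pairs for the given layer widths."""
    return [(0.01 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

rng = np.random.default_rng(0)

# Per-camera/ISO model: a flattened 5x5 RGB patch (75 values) in, the 6 unique
# entries of a 3x3 noise covariance out, one narrow hidden layer.
per_camera = make_layers([75, 64, 6], rng)

# Hypothetical generalized model: the same patch plus extra conditioning
# features (e.g. ISO, a camera-ID encoding), with deeper and wider hidden
# layers to cover many noise profiles at once.
generalized = make_layers([75 + 8, 256, 256, 128, 6], rng)

patch = rng.standard_normal(75)
print(mlp_forward(patch, per_camera).shape)  # -> (6,)
print(mlp_forward(np.concatenate([patch, rng.standard_normal(8)]),
                  generalized).shape)        # -> (6,)
```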
Thanks, Seonghyeon
Thank you Seonghyeon!
Hi Seonghyeon,
I have another question -- it is regarding the choice of Bayesian Non-Local Means (BNLM). I was curious why you chose this denoising method over all the others, including some more recent ones that outperform BNLM.
Is it because this method fits better with the way the noise modeling is done, i.e., a 3x3 noise covariance at each pixel, whereas other methods such as BM3D/Noise Clinic/NL-Bayes are patch based -- or is there some other motivation?
Thanks, Touqeer
Also, I wanted to confirm that you are sharing only the noise modeling part of the code, not the denoising part. Essentially, the code related to Section 5.2 in the paper is not shared?
In that case, can you please let me know if you will be sharing the code for the denoising part?
Otherwise, how many patches are used to compute Eq. 6/7? I understand you mentioned a patch size of 5x5 and a window of 35x35, but the number of patches is not mentioned.
Thank you!
It was mainly the difficulty of reformulating other algorithms around the 3x3 RGB covariance model, and the availability of source code at the time. Reformulating the block-matching part of BM3D requires heavy math, so it was beyond our scope.
What we did was simply change the dissimilarity formulation of BNLM as described in Eq. 6 and 7. We did not change much of the original open-source implementation apart from the dissimilarity, so we did not share our source code. But if you really need it, I will share it personally.
I do not remember exactly, but probably all overlapping 5x5 patches centered at each pixel within the 35x35 search window were used for the computation.
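In case it helps while the code is not public, here is a rough NumPy sketch of what a covariance-weighted non-local-means dissimilarity and weighting can look like. It is only an illustration of the idea behind Eq. 6/7: the function names are made up, boundary handling is omitted, and it is not our released implementation.

```python
import numpy as np

def patch_dissimilarity(img, cov, p, q, patch=5):
    """Covariance-weighted distance between the patches centered at pixels p
    and q.  img is HxWx3 (RGB), cov is HxWx3x3 (per-pixel noise covariance
    from the noise model).  Mahalanobis-style; illustration only."""
    r = patch // 2
    d = 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            diff = img[p[0] + dy, p[1] + dx] - img[q[0] + dy, q[1] + dx]
            c = cov[p[0] + dy, p[1] + dx] + cov[q[0] + dy, q[1] + dx]
            d += diff @ np.linalg.solve(c, diff)   # diff^T * C^-1 * diff
    return d / (patch * patch)

def denoise_pixel(img, cov, p, window=35, patch=5, h=1.0):
    """Non-local-means style estimate of pixel p: average over all candidate
    pixels q in the search window, weighted by the dissimilarity above."""
    r = (window - patch) // 2          # keep every 5x5 patch inside the window
    num, den = np.zeros(3), 0.0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            q = (p[0] + dy, p[1] + dx)
            w = np.exp(-patch_dissimilarity(img, cov, p, q, patch) / (h * h))
            num += w * img[q]
            den += w
    return num / den
```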
Hello, I’m working on the code. It’s amazing. Could you share the denoising part of the code? @woozzu
@shzxd Please send me an email via shnnam@yonsei.ac.kr
Okay, thank you
Hi Seonghyeon,
I have a few questions about the paper titled 'A Holistic Approach to Cross-Channel Image Noise Modeling and its Application to Image Denoising'.
If I understood the paper correctly, it looks like a separate MLP is trained for each camera/camera-setting combination. Here are my concerns:
1) I was curious whether a model trained for one specific camera, e.g. Nikon D800 (at ISO 1600), would perform/generalize well on test images that were not necessarily captured by the same camera. Do you have any experiments answering this question? -- I understand the space in the paper is limited and you cannot put everything in there.
2) Also, why not train a more generalized network for noise model estimation using patches from all the different cameras/camera settings? -- For real-world test images we will not necessarily know which camera or camera settings were used.
3) Presumably the network would no longer stay shallow if it were trained to model noise profiles across various cameras/camera settings?
Looking forward to hearing back from you.
Thanks, Touqeer