uclaopt / Provable_Plug_and_Play

[ICML 2019] Plug-and-Play Methods Provably Converge with Properly Trained Denoisers

About the convergence #4

Open XuVV opened 3 years ago

XuVV commented 3 years ago

Hi, really amazing work.
I have some questions about the convergence:

  1. Why is sigma set to 0 in the BN layer, i.e. `layers.append(bn_layer(features, 0.0))  # bn layer`? Is it because BN is not recommended for SN? If so, how can the denoiser still satisfy the required Lipschitz condition after the BN layer is added?
  2. Is convergence only related to the network weights? During the ADMM iterations, the inputs to the projection (denoising) network do not usually follow a Gaussian distribution, so if I change the noise type and the loss function (L1 instead of L2) during training, can convergence still be guaranteed? (See the sketch after this list.)
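
To be concrete about the ADMM inputs, here is a generic PnP-ADMM sketch. It is not this repository's code; `pnp_admm`, `prox_data`, `denoiser`, and `x0` are placeholder names for illustration. The point is that the denoiser is applied to `x + u`, which is generally not a clean image plus Gaussian noise.

```python
import numpy as np

def pnp_admm(prox_data, denoiser, x0, n_iters=50):
    # Generic PnP-ADMM sketch (placeholder names, not this repo's exact solver).
    # prox_data(v): proximal step of the data-fidelity term (closes over the
    #               measurements and the penalty parameter).
    # denoiser(v):  the plugged-in denoising network.
    x, z, u = np.copy(x0), np.copy(x0), np.zeros_like(x0)
    for _ in range(n_iters):
        x = prox_data(z - u)   # inversion / data-fidelity step
        z = denoiser(x + u)    # denoising step: the network sees x + u, not a
                               # Gaussian-noise-corrupted ground-truth image
        u = u + x - z          # dual (running residual) update
    return z
```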

Many thanks. Looking forward to your reply!

liujl11git commented 3 years ago

Hi,

With batch normalization, the neural network is not guaranteed to satisfy the Lipschitz condition, but it usually performs better. In our paper, we call a CNN without BN "SimpleCNN" and one with BN "DnCNN". If a SimpleCNN is trained with the "real SN" from our paper, it is guaranteed to satisfy the condition for any loss function (L1 or L2). The theory motivates why we designed real SN; with some practical techniques beyond the theory (e.g., BN), real SN achieves better performance.
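
For readers unfamiliar with spectral normalization, here is a minimal NumPy sketch of the underlying idea: power iteration estimates the largest singular value of a layer's weight, and the weight is rescaled so the layer's Lipschitz constant stays below a target. The function `spectral_normalize` and its arguments are hypothetical names for illustration; the paper's real SN normalizes the actual convolution operator during training rather than a plain matrix.

```python
import numpy as np

def spectral_normalize(W, n_power_iters=20, target_norm=1.0):
    # Rescale W so its largest singular value (its Lipschitz constant as a
    # linear map) is at most target_norm. Illustrative sketch only.
    u = np.random.randn(W.shape[0])
    for _ in range(n_power_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # estimated spectral norm
    return W if sigma <= target_norm else W * (target_norm / sigma)

# Example: cap the spectral norm of a random layer weight at 1.
W = np.random.randn(64, 64)
W_sn = spectral_normalize(W)
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # roughly 1.0 after rescaling
```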

XuVV commented 3 years ago

Thanks for your reply! The paper "How Does Batch Normalization Help Optimization?" shows that batch normalization gives the loss landscape better Lipschitz properties (a smoother landscape). Do you think that is why RealSN-DnCNN performs better than SimpleCNN?

liujl11git commented 3 years ago

The batch normalization layer was an empirical choice when we wrote the paper. The paper you pointed out is interesting and may provide a theoretical explanation for BN in PnP settings.