Ethan-Tseng / Neural_Nano-Optics

Repository for "Neural Nano-Optics for High-Quality Thin Lens Imaging"
Boost Software License 1.0

Some questions regarding hyperparameters #5

Open benhenryL opened 10 months ago

benhenryL commented 10 months ago

Hi team, thanks for sharing your brilliant work!

I have some questions about the hyperparameters and hope you can help me out.

  1. What is the total number of iterations? By default, args.steps is set to 1 billion, but I don't think that can finish in 18 hours. Or is there some early stopping implemented?
  2. What are the lambda values in the loss formulation? In another issue you recommended tuning the lambdas, but I wonder which values reproduce the PSNRs reported in the paper.
  3. When I clone the code and train the model, it runs without any errors, but the model does not train well. I think this is because some default settings differ from the paper (e.g. the SNR is not optimized, the phase is initialized as log-asphere-like rather than zeros, phase_iter is 0 and G_iter is 10, ...). Are those defaults intended, or am I doing something wrong? I changed some settings to approximate the paper's configuration, but the model still fails to train.
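For what it's worth, my current reading of how args.steps, the 18-hour budget, and the phase_iter/G_iter settings could interact is sketched below. This is a hypothetical toy loop, not the repo's actual training code: the function name, the alternation order, and the idea that a huge args.steps effectively means "run until the wall-clock budget expires" are all my assumptions.

```python
import time

def train(total_steps, max_seconds, phase_iters, g_iters):
    """Toy sketch: alternate optics (phase) updates and deconvolution
    network (G) updates until either the step budget or the wall-clock
    budget runs out. A very large total_steps (e.g. the 1-billion
    default) would make the time limit the effective stopping rule."""
    start = time.monotonic()
    phase_updates = g_updates = step = 0
    while step < total_steps and time.monotonic() - start < max_seconds:
        for _ in range(phase_iters):   # metasurface phase update (stand-in)
            phase_updates += 1
        for _ in range(g_iters):       # deconvolution network update (stand-in)
            g_updates += 1
        step += phase_iters + g_iters
    return phase_updates, g_updates
```

Under this reading, phase_iters=0 degenerates to training the deconvolution network only, which may explain why that configuration behaves differently from the alternating one.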

Thanks!!

LumaFilter commented 9 months ago

I can't train normally either. I followed the configuration outlined in the supplementary material: PHASE_ITERS=10, G_ITERS=100, METASURFACE=zeros, and used the INRIA Holidays dataset, but the phase parameters do not train as well as the supplementary material shows. If I instead set PHASE_ITERS=0 and METASURFACE=neural, i.e. train only the deconvolution network, it trains well.