Waller-Lab / LenslessLearning

Code for Lensless Learning Paper
https://waller-lab.github.io/LenslessLearning
BSD 3-Clause "New" or "Revised" License

nan in training #4


zhangyingerjelly commented 1 year ago

Since there is no training code provided, I wrote my own training code using the DiffuserCam dataset, but the parameter `le_admm.mu2` gets a NaN gradient during training, which causes training to fail. I'm not sure whether something is wrong with my training code, so could you publish your training code?
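A common workaround for this kind of failure (a sketch, not the authors' method; `PositivePenalty` and its names are hypothetical) is to learn a raw value mapped through softplus so the penalty parameter stays strictly positive, and to clip gradients so one bad batch cannot poison the updates:

```python
import torch

# Hypothetical sketch: keep a learned ADMM penalty parameter (e.g. mu2)
# strictly positive by learning a raw value and mapping it through
# softplus, and clip gradients to avoid NaN updates from a bad batch.
class PositivePenalty(torch.nn.Module):
    def __init__(self, init=1e-4):
        super().__init__()
        # invert softplus so the initial effective value equals `init`
        raw = torch.log(torch.expm1(torch.tensor(float(init))))
        self.raw = torch.nn.Parameter(raw)

    def forward(self):
        return torch.nn.functional.softplus(self.raw)  # always > 0

mu2 = PositivePenalty(init=1e-4)
loss = (mu2() - 0.5) ** 2              # stand-in for the reconstruction loss
loss.backward()
torch.nn.utils.clip_grad_norm_([mu2.raw], max_norm=1.0)
```

Clipping alone often isn't enough if the parameter itself can cross zero during optimization, which is why the positivity reparameterization is the key part of this sketch.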

HkDzl commented 1 year ago

I also want to see the training code.

arpanpoudel commented 1 year ago

> Since there is no training code provided, I wrote my own training code using the DiffuserCam dataset, but the parameter `le_admm.mu2` gets a NaN gradient during training, which causes training to fail. I'm not sure whether something is wrong with my training code, so could you publish your training code?

do you have your training code?

mmahjoub5 commented 1 year ago

I am also having this issue

ebezzam commented 11 months ago

Hi @zhangyingerjelly, @HkDzl, @arpanpoudel, we've released training code to reproduce this (and other features) here: https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_unrolled.py

By default it uses this configuration (sorry if there's a lot going on there!). The script is part of a broader package for lensless imaging (measurement, reconstruction algorithms, simulation, and evaluation): GitHub, documentation. It's a work in progress, but I hope it helps, and I'm happy to support setup and to get your feedback!

arpanpoudel commented 11 months ago

@ebezzam I have gone through your training recipe and saw that you downsampled the images to 135×240 during training. Does this downsampling affect the reconstruction, considering the multiplexing property? I also saw that you changed the image pairs from BGR to RGB before training; will this affect the reconstruction?

ebezzam commented 11 months ago

@arpanpoudel thanks for looking into the code!

Yes, by default we downsample by a factor of 2 along each dimension, but if you set this factor to 1, you can keep the original resolution of the measurements. My impression is that the original authors already downsampled the data to make the dataset more manageable (the provided PSF has 4x the resolution of the measurements). But you're right: in general, downsampling affects reconstruction. Higher-resolution measurements can allow higher-resolution reconstructions, but at a computational cost (larger FFTs). You can find some of our results here: Figure 5.4 compares different reconstruction approaches.
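As an illustration (a sketch, not the package's actual data loader), downsampling a measurement by a factor along each spatial dimension can be done with average pooling, where a factor of 1 keeps the native resolution:

```python
import torch

def downsample(img, factor=2):
    """Average-pool an (H, W, C) measurement by `factor` along H and W.
    factor=1 returns the input unchanged."""
    if factor == 1:
        return img
    x = img.permute(2, 0, 1).unsqueeze(0)           # (1, C, H, W)
    x = torch.nn.functional.avg_pool2d(x, factor)
    return x.squeeze(0).permute(1, 2, 0)            # (H//f, W//f, C)

# e.g. a 270x480 RGB measurement becomes 135x240 with factor=2
meas = torch.rand(270, 480, 3)
print(downsample(meas, 2).shape)   # torch.Size([135, 240, 3])
```

Averaging (rather than plain subsampling) also acts as a mild anti-aliasing filter, which tends to matter for multiplexed lensless measurements.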

Regarding the conversion from BGR to RGB: the original authors also do this, but at the output. We do it before reconstruction, which doesn't make a difference since each color channel is handled independently during reconstruction.
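Since channel order only relabels the color planes, any operation that treats each channel independently commutes with the BGR-to-RGB flip. A minimal sketch (with a per-channel normalization standing in for the actual reconstruction):

```python
import numpy as np

def bgr_to_rgb(img):
    # Reverse the channel axis of an (H, W, 3) array.
    return img[..., ::-1]

def per_channel_recon(img):
    # Stand-in for a reconstruction that treats each channel
    # independently (here: normalize each channel by its max).
    return img / img.max(axis=(0, 1), keepdims=True)

img_bgr = np.random.rand(4, 4, 3)
a = per_channel_recon(bgr_to_rgb(img_bgr))   # convert before reconstruction
b = bgr_to_rgb(per_channel_recon(img_bgr))   # convert after reconstruction
print(np.allclose(a, b))   # True
```

This is why converting before or after reconstruction gives identical results, as long as no cross-channel processing is involved.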