jnjaby / DISCNet

Code for DISCNet.

Is the light source localized when eliminating the glow? #18

Closed WuWanyu98 closed 2 years ago

WuWanyu98 commented 2 years ago

Hi, thank you for your innovative contributions in this work. Would you please explain the training details below?

  1. When processing the synthetic data, you crop it into 800×800 patches and then use these sub-images for training and testing. The training mainly targets the degradation around light sources, but not every sub-image contains a light source, so I wonder whether these sub-images include a lot of dirty data that hurts training efficiency.
  2. In Section 4.2 you emphasize that various kinds of image degradation are modeled by processing PSFs with PCA. Could you provide a list of the degradations modeled in the training stage?
  3. Figure 4 shows a specific PSF as input, while Table 1 shows that you use various PSFs. Are the different degradations achieved with multiple PSFs or with a single one? Could you describe in detail how the varied degradations are implemented?
  4. PSFs were previously applied globally in image deblurring. In this work, however, the light-source region is simulated locally with the PSF kernel code, so do you use the spatial location of the light source? And how does the PSF act directly on the light source without affecting other areas?
  5. I want to test a JPG image affected by diffraction, but a corresponding NumPy file must also be supplied for testing. How can I obtain this NumPy file? (See the sketch after this list.)
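Regarding question 5: a minimal sketch (not the authors' code) of how a PSF kernel code might be produced and saved as the .npy file the test script expects, assuming the kernel code is the PCA projection of a PSF and the script wants a per-image condition map of shape (k, H, W). The file names and the component count k=5 are hypothetical.

```python
import numpy as np

psfs = np.load("psf_stack.npy")             # assumed shape: (N, h, w)
flat = psfs.reshape(len(psfs), -1)          # vectorize each PSF

# PCA via SVD on mean-centred PSF vectors
mean = flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
k = 5                                       # number of principal components (assumed)
basis = vt[:k]                              # (k, h*w) top-k components

# Kernel code for one PSF = its coefficients in the PCA basis
code = basis @ (flat[0] - mean)             # (k,) code for the first PSF

# Tile the code into a spatially uniform condition map for a test image
H, W = 800, 800                             # patch size used in the paper
cond = np.broadcast_to(code[:, None, None], (k, H, W)).copy()
np.save("kernel_code.npy", cond)            # the .npy a test script could load
```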

Best regards

WuWanyu98 commented 2 years ago

Thank you very much for your email reply!

KKKLeouee commented 2 years ago


Hello, I am a graduate student. I came across your work on DISCNet and light effects by chance, as I am also researching the suppression of strong light. Could you share your contact information so I can ask you some questions about dataset production and code reproduction?

My email is: 956358300@qq.com

Please forgive the interruption. Looking forward to your reply.