jnjaby / DISCNet

Code for DISCNet.

Did you release the PCA code? #9

Closed · Jian-danai closed this issue 3 years ago

Jian-danai commented 3 years ago

Hi, I was wondering how you obtained the kernel code with shape (5,) from the PSF with shape (800, 800, 3). Do you have the code for that?

Also, why is the shape of the PSF not the same as that of the input images, which is (2448, 3264, 3)? (Or does it not need to be the same?)

What does psf.expand(-1, -1, H, W) mean? Is it just to concatenate the code of shape (1, 5, 1, 1) with the input of shape (1, 3, 2448, 3264)? If so, why not expand the original PSF of shape (1, 3, 800, 800) instead, which would lose less information since it skips the PCA step?

Thanks so much!

jnjaby commented 3 years ago

Hi, you can regard the kernel space as an (800x800x3)-dimensional linear space and project the kernels onto the kernel code space via PCA to obtain the kernel code of shape (5,).

We crop the center patch (800x800) of the PSF since the outer regions are flooded with readout noise and contain no useful information.

Using PCA for dimension reduction is common practice in image restoration with multiple degradations (e.g., SRMD [1], IKC [2]). It has been shown that learning kernel codes after PCA, rather than the kernels themselves, provides a better feature representation and accounts for the variation of kernels [2]. Another merit is applicability to spatially variant kernels, since the kernel code map is defined on a per-pixel basis.

[1] Zhang, Kai, Wangmeng Zuo, and Lei Zhang. "Learning a single convolutional super-resolution network for multiple degradations." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
[2] Gu, Jinjin, et al. "Blind super-resolution with iterative kernel correction." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
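
As for psf.expand(-1, -1, H, W) in the earlier question: expanding the (1, 5, 1, 1) code to (1, 5, H, W) gives the per-pixel kernel code map described above, which can then be concatenated with the input along the channel dimension. A minimal PyTorch sketch of this broadcasting (tensor names here are illustrative, not the released code):

import torch

# Illustrative shapes from the discussion above.
H, W = 2448, 3264
img = torch.randn(1, 3, H, W)    # degraded input image
code = torch.randn(1, 5, 1, 1)   # one PCA kernel code per image

# expand(-1, -1, H, W) broadcasts the 5-dim code to every pixel location,
# producing a per-pixel kernel code map without copying memory.
code_map = code.expand(-1, -1, H, W)            # shape (1, 5, H, W)

# The code map is then concatenated with the image along the channel axis.
cond_input = torch.cat([img, code_map], dim=1)  # shape (1, 8, H, W)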

Jian-danai commented 3 years ago

Thank you. Did you release the PCA code that transforms the (800, 800, 3) PSF into the (5,) code?

Also, how does the kernel code with shape (5,) affect the later per-pixel filter generation?

jnjaby commented 3 years ago

We didn't release that part of the code, but here is an example for your reference. We set k = 5, as is conventional in image restoration with multiple degradations, and a 5-dimensional code can explain over 90% of the variance.

import glob
import os

import numpy as np
from sklearn.decomposition import PCA

# PSF_vars: your concatenated and flattened PSFs, one flattened PSF per row.
# n_components=0.99 keeps enough components to explain 99% of the variance.
PSF_pca = PCA(n_components=0.99)
PSF_pca.fit(PSF_vars)

save_folder = '...'
PSF_dir = '...'
PSF_list = sorted(glob.glob(os.path.join(PSF_dir, '*.npy')))

for i in range(len(PSF_list)):
    # Load each PSF and derive the output file name for its kernel code.
    PSF = np.load(PSF_list[i])
    name = os.path.basename(PSF_list[i]).replace('_psf_', '_code_')

    # Project the flattened PSF onto the fitted PCA basis to obtain its kernel code.
    code = PSF_pca.transform(PSF.flatten()[None, ...]).squeeze().astype(np.float32)
    np.save(os.path.join(save_folder, name), code)
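
Here, PSF_vars is simply the data matrix the PCA is fit on: one flattened PSF per row. A minimal sketch of how it might be assembled (this assembly step is an assumption based on the comment in the snippet above, not released code), assuming each .npy file under PSF_dir holds one cropped (800, 800, 3) PSF:

import glob
import os

import numpy as np

PSF_dir = '...'
PSF_list = sorted(glob.glob(os.path.join(PSF_dir, '*.npy')))

# Stack every flattened PSF into a (num_PSFs, 800*800*3) matrix for PCA.fit.
PSF_vars = np.stack([np.load(p).flatten() for p in PSF_list], axis=0)
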
Jian-danai commented 3 years ago

What is 'PSF_vars'? Actually, I am still wondering how the PCA transformation matrix is trained.

wstmac commented 2 years ago

Hi danai, have you figured out how to get all the PSF_vars? In the paper they said they obtain these PSF_vars by rotating the original PSF, but I don't know how to do the rotation or what angles they chose.
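
For reference, the rotation mechanics themselves are standard; a minimal sketch assuming scipy and one cropped (800, 800, 3) PSF, with the angle list left as a placeholder since the paper's exact choice is not stated in this thread:

import numpy as np
from scipy.ndimage import rotate

# Synthetic stand-in for one cropped PSF of shape (800, 800, 3).
psf = np.random.rand(800, 800, 3).astype(np.float32)

# Placeholder angles; the angles actually used in the paper are not given here.
angles = [0, 90, 180, 270]

# Rotate in the spatial plane while keeping the (800, 800, 3) shape, then flatten
# each rotated copy so the results can be stacked into PSF_vars for PCA fitting.
PSF_vars = np.stack(
    [rotate(psf, a, axes=(0, 1), reshape=False, order=1).flatten() for a in angles],
    axis=0,
)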