TanGeeGo / DegradationTransfer

This is the official Pytorch implementation of "Extreme-Quality Computational Imaging via Degradation Framework" (ICCV 2021)
https://openaccess.thecvf.com/content/ICCV2021/html/Chen_Extreme-Quality_Computational_Imaging_via_Degradation_Framework_ICCV_2021_paper.html

DegradationTransfer & FoV-KPN (ICCV, 2021)

by Shiqi Chen, Keming Gao, Huajun Feng, Zhihai Xu, Yueting Chen


Due to commercial restrictions, we cannot make the trained models and training data public. We sincerely apologize for this.

Prerequisites

The Deformable ConvNets V2 (DCNv2) module in our code adopts Xujiarui's implementation. We recommend recompiling the code for your machine and Python environment as follows:

cd ~/dcn
python setup.py develop

Compilation can fail depending on your environment; please open an issue if you run into any problems!
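DCNv2's core idea is to add a learned 2-D offset to every kernel sampling location and read the input with bilinear interpolation. The sketch below illustrates that sampling in plain NumPy; it is illustrative only (the names `bilinear_sample` and `deform_conv_point` are ours, and the actual module is a compiled CUDA op):

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Sample img at fractional (y, x) with bilinear interpolation,
    clamping coordinates to the image border."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    def at(r, c):
        return img[np.clip(r, 0, h - 1), np.clip(c, 0, w - 1)]
    return ((1 - wy) * (1 - wx) * at(y0, x0) + (1 - wy) * wx * at(y0, x1)
            + wy * (1 - wx) * at(y1, x0) + wy * wx * at(y1, x1))

def deform_conv_point(img, weights, offsets, cy, cx):
    """One output value of a 3x3 deformable convolution centered at (cy, cx).
    offsets has shape (3, 3, 2): a learned (dy, dx) per kernel tap."""
    out = 0.0
    for i in range(3):
        for j in range(3):
            dy, dx = offsets[i, j]
            out += weights[i, j] * bilinear_sample(
                img, cy + i - 1 + dy, cx + j - 1 + dx)
    return out
```

With all offsets at zero this reduces to an ordinary 3x3 convolution, which is a handy sanity check.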

Data Acquisition

1. Prepare your camera and experimental devices as shown in the setup figure.

2. Calibrate the environment illuminance

python data_capture.py -n 1 -t 1.5
adb pull ~/DCIM/Camera/whiteboard.dng ~/whiteboard
dcraw -v -4 -T -w -n 300 -q 3 -o 0 ~/whiteboard/whiteboard.dng
python env_illuminance.py -i ~/whiteboard/whiteboard.tiff -o ~/env_illu.mat -p 100
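The exact logic of `env_illuminance.py` is not reproduced here, but a plausible minimal sketch is: average a central patch of the linear whiteboard shot per channel (we assume `-p 100` is the patch size) to estimate the scene illuminance, then save the result (e.g. with `scipy.io.savemat`):

```python
import numpy as np

def estimate_illuminance(whiteboard, patch=100):
    """Estimate per-channel environment illuminance from a whiteboard shot
    by averaging a central patch x patch window. Hypothetical sketch of
    env_illuminance.py; the real script may differ.
    whiteboard: (H, W, 3) linear image as a float array."""
    h, w, _ = whiteboard.shape
    half = patch // 2
    cy, cx = h // 2, w // 2
    window = whiteboard[cy - half:cy + half, cx - half:cx + half]
    return window.reshape(-1, 3).mean(axis=0)
```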

3. Checkerboard capture and postprocessing

python data_capture.py -n 7 -t 1.5
adb pull -r ~/DCIM/Camera/*.dng ~/rawdata
python post_processing.py -i ~/rawdata -n 7 -e ~/env_illu.mat -d 1.0

The 16-bit images are saved in the same directory as the raw data, named "*_out.tiff".
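At its core, this post-processing step presumably normalizes the linear raw data by the calibrated illuminance so the checkerboard is evenly lit, then rescales to 16 bits. A hypothetical sketch (we assume `-d` is a digital gain; the real `post_processing.py` also handles demosaicing, alignment, etc.):

```python
import numpy as np

def flat_field_correct(raw, env_illu, digital_gain=1.0):
    """Divide a linear (H, W, 3) raw image by the per-channel calibrated
    illuminance, apply a digital gain, and quantize to 16-bit.
    Illustrative sketch only, not the full post-processing pipeline."""
    corrected = raw / np.maximum(env_illu, 1e-8) * digital_gain
    return np.clip(corrected * 65535.0, 0, 65535).astype(np.uint16)
```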

Backward Transfer

1. Obtain the checkerboard images post-processed by the data-acquisition procedure above.

2. Generate the ideal patch from the real checkerboard

>>> patch_generator.m
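`patch_generator.m` is a MATLAB script; as a rough Python illustration of one ingredient, rendering an ideal (aberration-free) checkerboard patch that the degraded captures can be paired against (the function name and parameters here are ours, not the script's):

```python
import numpy as np

def ideal_checkerboard(squares=8, square_px=32, white=1.0, black=0.0):
    """Render an ideal binary checkerboard patch: `squares` x `squares`
    alternating cells, each `square_px` pixels wide. Illustrative
    stand-in for patch_generator.m, whose exact logic is not public."""
    ii, jj = np.meshgrid(np.arange(squares), np.arange(squares), indexing="ij")
    board = np.where((ii + jj) % 2 == 0, white, black)
    # Expand each cell to square_px x square_px pixels.
    return np.kron(board, np.ones((square_px, square_px)))
```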


Degradation Transfer

1. After preparing the paired patches in "../backward_transfer/data/input" and "../backward_transfer/data/label", training can be performed by:

CUDA_VISIBLE_DEVICES=0 python train.py -d ../backward_transfer/data/ -o ~/output/ --region 0.0 0.5 0.0 0.5 --white_balance 1.938645 1.000000 1.889194
CUDA_VISIBLE_DEVICES=1 python train.py -d ../backward_transfer/data/ -o ~/output/ --region 0.0 0.5 0.5 1.0 --white_balance 1.938645 1.000000 1.889194
CUDA_VISIBLE_DEVICES=2 python train.py -d ../backward_transfer/data/ -o ~/output/ --region 0.5 1.0 0.0 0.5 --white_balance 1.938645 1.000000 1.889194
CUDA_VISIBLE_DEVICES=3 python train.py -d ../backward_transfer/data/ -o ~/output/ --region 0.5 1.0 0.5 1.0 --white_balance 1.938645 1.000000 1.889194
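The four `--region` arguments appear to select a normalized quadrant of the field of view, so each GPU trains on one quarter of the image in parallel. A sketch of that cropping (the row/column ordering of the four values is our assumption):

```python
import numpy as np

def crop_region(img, region):
    """Crop a normalized sub-region from an image.
    region = (r0, r1, c0, c1) with values in [0, 1]; the exact axis
    order expected by train.py --region is an assumption here."""
    h, w = img.shape[:2]
    r0, r1, c0, c1 = region
    return img[int(r0 * h):int(r1 * h), int(c0 * w):int(c1 * w)]
```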

2. The predicted PSFs are saved during training; collect the test PSFs of each FoV with:

python kernel_sort.py -d ~/ -o ~/kernel/
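A plausible minimal sketch of what `kernel_sort.py` does: gather the per-FoV PSF arrays, order them by the FoV index embedded in their file names, and stack them into one array (the naming scheme `psf_fovNN` is hypothetical; writing `kernel.mat` would then use `scipy.io.savemat`):

```python
import re
import numpy as np

def collect_psfs(psf_files, psfs):
    """Sort per-FoV PSF arrays by the first integer in each file name
    (e.g. "psf_fov03.npy" -> 3) and stack them into a (n_fov, k, k)
    array. Hypothetical sketch of kernel_sort.py."""
    def fov_index(name):
        m = re.search(r"(\d+)", name)
        return int(m.group(1)) if m else -1
    order = sorted(range(len(psf_files)), key=lambda i: fov_index(psf_files[i]))
    return np.stack([psfs[i] for i in order])
```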

3. Use the PSFs of different FoVs to generate data pairs.

python data_generator.py

Note that the image paths in "data_generator.py" need to be changed to your own, including the label image path, the output image path, and the PSF path:

# input image path
# label8bit_dir = '~/train_datasets/label_8bit'
label8bit_dir = '~/valid_datasets/label_8bit'
# label raw path
# labelraw_dir = '~/train_datasets/label_rgb'
labelraw_dir = '~/valid_datasets/label_rgb'
create_dir(labelraw_dir)
# output image path
# inputraw_dir = '~/train_datasets/input_rgb'
inputraw_dir = '~/valid_datasets/input_rgb'
create_dir(inputraw_dir)
# kernel path
kernel_path = '~/kernel/kernel.mat'
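The heart of the data synthesis is blurring each clean label patch with the PSF of its FoV. A minimal sketch of that step (a "same"-size sliding-window correlation with zero padding; flip the PSF first for a strict convolution — the real `data_generator.py` also handles noise, white balance, etc.):

```python
import numpy as np

def degrade_patch(label, psf):
    """Synthesize a degraded input patch by blurring a clean (H, W)
    label patch with a (k, k) PSF, "same" output size, zero padding.
    Illustrative sketch only, not the full simulation pipeline."""
    k = psf.shape[0]
    pad = k // 2
    padded = np.pad(label, pad)
    out = np.zeros_like(label, dtype=float)
    for i in range(k):
        for j in range(k):
            out += psf[i, j] * padded[i:i + label.shape[0], j:j + label.shape[1]]
    return out
```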

FoV-KPN

1. Prepare the dataset for your camera by:

python dataset_generator.py

Note that the path information in this file needs to be updated for your machine:

date_ind = "20220329" # date information for h5py file
dataset_type = "valid" # type of dataset "train" or "valid"
camera_idx = "camera04" # index of camera "camera01" to "camera05" 
base_path = "/hdd4T_2/Aberration2021/synthetic_datasets" # system path 
input_dir = "input_rgb_20220329" # input data dir
label_dir = "label_rgb" # label data dir
if_mask = False # whether add mask
# split FoV for dataset generation
# splited_fov = [0.0, 0.3, 0.6, 0.9, 1.0]
splited_fov = [0.0, 1.0]
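`splited_fov` suggests that training patches are bucketed by their normalized distance from the image center before being written into per-FoV datasets. A hypothetical sketch of that assignment (the bucketing rule is our assumption, not the script's verified logic):

```python
import numpy as np

def fov_bucket(center, img_shape, splited_fov):
    """Assign a patch (by its center pixel (row, col)) to a FoV interval
    from splited_fov, using the normalized radial distance from the
    image center. Hypothetical sketch of the split in dataset_generator.py."""
    h, w = img_shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(center[0] - cy, center[1] - cx) / np.hypot(cy, cx)
    for k in range(len(splited_fov) - 1):
        if splited_fov[k] <= r <= splited_fov[k + 1]:
            return k
    return len(splited_fov) - 2
```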

2. Check the option file information

Note: The training information and the test information are in the same option.py file!

3. Train the FoV-KPN

python train.py

4. Test on the actual photographs of your camera

python test_real.py

Citation

If you find the code helpful in your research or work, please cite the following paper.

@InProceedings{Chen_2021_ICCV,
    author    = {Chen, Shiqi and Feng, Huajun and Gao, Keming and Xu, Zhihai and Chen, Yueting},
    title     = {Extreme-Quality Computational Imaging via Degradation Framework},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {2632-2641}
}