This is an official PyTorch release of the paper "C2N: Practical Generative Noise Modeling for Real-World Denoising" from ICCV 2021.
If you find C2N useful in your research, please cite our work as follows:
@InProceedings{Jang_2021_ICCV,
    author    = {Jang, Geonwoon and Lee, Wooseok and Son, Sanghyun and Lee, Kyoung Mu},
    title     = {C2N: Practical Generative Noise Modeling for Real-World Denoising},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {2350-2359}
}
You can place any custom images in `./data`, and image datasets in a subdirectory `./data/[name_of_dataset]`.
The SIDD and DND benchmark images can be found at SIDD Benchmark and [DND Benchmark](). Convert them into .png images and place them in the corresponding subdirectories.
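The conversion step itself is not included in this repository. The snippet below is a minimal sketch of one way to dump the SIDD benchmark noisy blocks to .png with Python; the .mat file name, the array key, and the dependencies (scipy, imageio) are assumptions based on the official SIDD benchmark package, and the DND benchmark, which ships per-image .mat files, would need an analogous loop.

```python
# Sketch: convert SIDD benchmark noisy blocks (.mat) into .png images.
# The file name and array key are assumptions about the official SIDD
# benchmark package; verify them against the files you actually downloaded.
import os

import imageio.v2 as imageio
import numpy as np
import scipy.io as sio

mat = sio.loadmat('BenchmarkNoisyBlocksSrgb.mat')  # assumed file name
blocks = mat['BenchmarkNoisyBlocksSrgb']           # assumed key, shape (#images, #blocks, H, W, 3)

out_dir = 'data/SIDD_benchmark'  # matches the --data name used in the examples below
os.makedirs(out_dir, exist_ok=True)
for i in range(blocks.shape[0]):
    for j in range(blocks.shape[1]):
        imageio.imwrite(
            os.path.join(out_dir, f'{i:02d}_{j:02d}.png'),
            blocks[i, j].astype(np.uint8),
        )
```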
Download the following pre-trained models:
| Generator | Clean | Noisy | config | Pre-trained |
| --- | --- | --- | --- | --- |
| C2N | SIDD | SIDD | C2N_DnCNN | model |
| C2N | SIDD | DND | C2N_DnCNN | model |
| Denoiser | Generator | Clean | Noisy | Clean (denoiser train) | config | Pre-trained |
| --- | --- | --- | --- | --- | --- | --- |
| DnCNN | C2N | SIDD | SIDD | SIDD | C2N_DnCNN | model |
| DIDN | C2N | SIDD | SIDD | SIDD | C2N_DIDN | model |
| DIDN | C2N | SIDD | DND | SIDD | C2N_DIDN | model |
- `config`: Name of the configuration.
- `ckpt`: Name of the checkpoint to load. Choose between 'C2N-SIDD_to_SIDD' and 'C2N-DND_to_SIDD' depending on the noisy images it was trained on.
- `mode`: 'single' or 'dataset'.
- `data`: Filename of a clean image if `mode` is 'single', or a dataset of clean images if `mode` is 'dataset'.
- `gpu`: GPU id. Currently this demo only supports a single GPU or CPU device.

Examples:
# Generate on a single clean image
python test_generate.py --ckpt C2N-SIDD_to_SIDD.ckpt --mode single --data clean_ex1.png --gpu 0
python test_generate.py --ckpt C2N-DND_to_SIDD.ckpt --mode single --data clean_ex2.png --gpu 0
# Generate on clean images in a dataset
python test_generate.py --ckpt C2N-SIDD_to_SIDD.ckpt --mode dataset --data SIDD_clean_examples --gpu 0
python test_generate.py --ckpt C2N-DND_to_SIDD.ckpt --mode dataset --data SIDD_clean_examples --gpu 0
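As a quick sanity check on the generated samples, the sketch below compares a clean input with its generated noisy counterpart by looking at the residual statistics. It is not part of the repository, and the output path under `results/` is an assumption, so adjust it to wherever `test_generate.py` actually writes its images.

```python
# Sketch: inspect the residual between a clean image and its C2N-generated
# noisy version. Paths are assumptions; adapt them to the actual output
# location of test_generate.py.
import imageio.v2 as imageio
import numpy as np

clean = imageio.imread('data/clean_ex1.png').astype(np.float32)
noisy = imageio.imread('results/clean_ex1.png').astype(np.float32)  # assumed output file name

residual = noisy - clean
print(f'noise mean: {residual.mean():.3f}, noise std: {residual.std():.3f}')
```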
- `config`: Name of the configuration. Choose between 'C2N_DnCNN' and 'C2N_DIDN' depending on the denoiser to be used.
- `ckpt`: Name of the checkpoint to load.
- `mode`: 'single' or 'dataset'.
- `data`: Filename of a noisy/generated image if `mode` is 'single', or a dataset of noisy/generated images if `mode` is 'dataset'.
- `gpu`: GPU id. Currently this demo only supports a single GPU or CPU device.

Examples:
# Denoise a single noisy image
python test_denoise.py --config C2N_DnCNN --ckpt DnCNN-SIDD_to_SIDD-on_SIDD --mode single --data noisy_ex1_SIDD.png --gpu 0
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_SIDD-on_SIDD --mode single --data noisy_ex1_SIDD.png --gpu 0
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_DND-on_SIDD --mode single --data noisy_ex2_DND.png --gpu 0
# Denoise noisy images in a dataset
python test_denoise.py --config C2N_DnCNN --ckpt DnCNN-SIDD_to_SIDD-on_SIDD --mode dataset --data SIDD_benchmark --gpu 0
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_SIDD-on_SIDD --mode dataset --data SIDD_benchmark --gpu 0
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_DND-on_SIDD --mode dataset --data DND_benchmark --gpu 0
# Denoise the generated images from C2N
# For example, you may first copy a generated image from `results/[input_clean_data_path*]` to `data/[input_clean_data_path*]_generated.png`.
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_SIDD-on_SIDD --mode single --data clean_ex1_generated.png --gpu 0
python test_denoise.py --config C2N_DIDN --ckpt DIDN-SIDD_to_DND-on_SIDD --mode single --data clean_ex2_generated.png --gpu 0
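When the clean source of a generated image is available, as in the last two commands, you can measure how well the denoiser recovers it. The PSNR helper below is a generic sketch; the denoised output file name is a hypothetical placeholder and should be replaced with wherever `test_denoise.py` saves its result.

```python
# Sketch: PSNR between a denoised result and its clean source.
# The denoised file name is a hypothetical placeholder; check where
# test_denoise.py writes its output.
import imageio.v2 as imageio
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = imageio.imread('data/clean_ex1.png')
denoised = imageio.imread('results/denoised_clean_ex1.png')  # hypothetical file name
print(f'PSNR: {psnr(denoised, clean):.2f} dB')
```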