This repository is the official implementation of our BMVC 2020 paper, "Towards Fast and Light-Weight Restoration of Dark Images"
— Mohit Lamba, Atul Balaji, Kaushik Mitra
A single PDF of the paper and the supplementary is available at arXiv.org.
In this work we propose a deep neural network, called LLPackNet, that can restore very high-definition (2848×4256) extremely dark night-time images in just 3 seconds, even on a CPU. This is achieved with 2−7× fewer model parameters, 2−3× lower memory utilization, and a 5−20× speed-up, while maintaining competitive image reconstruction quality compared to state-of-the-art algorithms.
Watch the video below for results and an overview of LLPackNet.
The pseudocode for the Pack/UnPack operations is shown below.
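To make the Pack operation concrete, here is a minimal sketch we wrote for illustration (the function name and layout are our assumptions, not the exact repository code): for α = 8, each of the 8×8 = 64 spatial offsets of the high-resolution image becomes its own 3-channel group of the low-resolution tensor.

```python
import torch

def pack(iHR, alpha=8):
    """Hypothetical sketch of Pack: (1, 3, H, W) -> (1, 3*alpha^2, H/alpha, W/alpha).

    The channel ordering matches the naive UnPack loop shown later in this
    README: channels advance 3 at a time as (ii, jj) scans the alpha x alpha grid.
    """
    _, C, H, W = iHR.shape
    iLR = torch.zeros(1, C * alpha * alpha, H // alpha, W // alpha)
    count = 0
    for ii in range(alpha):
        for jj in range(alpha):
            # every alpha-th pixel starting at offset (ii, jj) forms one 3-channel slice
            iLR[:, count:count + C, :, :] = iHR[:, :, ii:H:alpha, jj:W:alpha]
            count += C
    return iLR
```

UnPack is the exact inverse of this rearrangement, scattering the channel groups back to their spatial offsets.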
With regard to the above algorithm, a naive implementation of the UnPack operation for α = 8, H = 2848 and W = 4256 can be written as follows,
```python
iHR = torch.zeros(1, 3, H, W, dtype=torch.float).to(self.device)
counttt = 0
for ii in range(8):
    for jj in range(8):
        iHR[:, :, ii:H:8, jj:W:8] = iLR[:, counttt:counttt+3, :, :]
        counttt = counttt + 3
```
However, the above code is computationally slow; in PyTorch it can be implemented using the following vectorised code,

```python
iLR.reshape(-1, 8, 3, H//8, W//8).permute(2, 3, 0, 4, 1).reshape(1, 3, H, W)
```
If you find any information provided here useful, please cite us,
```
@article{lamba2020LLPackNet,
  title={Towards Fast and Light-Weight Restoration of Dark Images},
  author={Lamba, Mohit and Balaji, Atul and Mitra, Kaushik},
  journal={arXiv preprint arXiv:2011.14133},
  year={2020}
}
```
You first need to download the SID dataset and install rawpy for processing RAW images.
Now execute inference.py, which will load the images, save the enhanced images in a new directory, and create inference.txt reporting the average PSNR and SSIM. We recommend MATLAB over Python for the PSNR and SSIM calculations.
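For reference, PSNR on images normalised to [0, 1] can be computed as below. This is a minimal NumPy sketch we provide for illustration (it mirrors MATLAB's `psnr()` definition), not the repository's exact evaluation code.

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """PSNR in dB between two images with pixel values in [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Small numerical differences between implementations (e.g. MATLAB vs. skimage) usually come from data-range and channel-averaging conventions, which is why a single tool should be used consistently when comparing against published numbers.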
The project was initially tested with PyTorch 1.3.1. With the introduction of new commands in the latest PyTorch releases, we have since replaced more for loops with vectorised commands, giving a further speedup. Moreover, we discovered that using

```python
iLR.reshape(-1, 8, 3, H//8, W//8).permute(2, 3, 0, 4, 1).reshape(1, 3, H, W)
```

to mimic the 8× UnPack operation causes training instabilities with undersaturated black regions. Training therefore required logging the weights every few epochs and fine-grained scheduling. The main reason for this instability is the incorrect interaction between the channel and batch dimensions in the above vector operation. We thus recommend using

```python
iLR.reshape(-1, 8, 8, 3, H//8, W//8).permute(0, 3, 4, 1, 5, 2).reshape(-1, 3, H, W)
```

instead. With this update the training becomes much more stable, the underexposed regions vanish, and the parameter count remains the same while offering an additional speedup.
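As a quick sanity check (a small sketch we wrote for illustration, not part of the repository), the recommended vectorised UnPack can be verified against the naive double loop on a small tensor:

```python
import torch

H, W = 16, 16  # small stand-ins for 2848 x 4256
iLR = torch.randn(1, 192, H // 8, W // 8)

# naive UnPack via the double for loop
iHR = torch.zeros(1, 3, H, W)
count = 0
for ii in range(8):
    for jj in range(8):
        iHR[:, :, ii:H:8, jj:W:8] = iLR[:, count:count + 3, :, :]
        count += 3

# recommended vectorised UnPack
iHR_vec = (iLR.reshape(-1, 8, 8, 3, H // 8, W // 8)
              .permute(0, 3, 4, 1, 5, 2)
              .reshape(-1, 3, H, W))

print(torch.equal(iHR, iHR_vec))  # → True
```

The key point is that splitting the 192 channels as (8, 8, 3) keeps the (ii, jj) offsets and the RGB channels in separate axes, so the subsequent permute interleaves them into the spatial dimensions without mixing the channel and batch axes.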