PyTorch implementation of CompenNet. Also see CompenNet++ and the journal version.

For highlights and more details, please refer to our CVPR'19 paper and supplementary material.
Clone this repo:

```
git clone https://github.com/BingyaoHuang/CompenNet
cd CompenNet
```

Install required packages by typing

```
pip install -r requirements.txt
```
Download the CompenNet benchmark dataset and extract it to `CompenNet/data`.
Start visdom by typing

```
visdom
```

Once visdom is successfully started, visit http://localhost:8097
Open `main.py` and set which GPUs to use. An example is shown below, where we use GPUs 0, 2 and 3 to train the model.

```python
# make only physical GPUs 0, 2 and 3 visible to PyTorch
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 2, 3'

# the visible GPUs are re-indexed from 0, so ids 0, 1, 2 refer to physical GPUs 0, 2, 3
device_ids = [0, 1, 2]
```
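For context, below is a minimal, self-contained sketch of how such a `device_ids` list is typically consumed in PyTorch; this is an assumption for illustration (with a placeholder model), not the repo's exact training code.

```python
import os

# must be set before any CUDA call; makes physical GPUs 0, 2 and 3 visible
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 2, 3'

import torch
import torch.nn as nn

device_ids = [0, 1, 2]  # logical indices of the three visible GPUs

model = nn.Conv2d(3, 3, 3)  # placeholder model, not CompenNet itself
if torch.cuda.is_available():
    # DataParallel splits each input batch across the listed GPUs
    model = nn.DataParallel(model, device_ids=device_ids).cuda()
```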
Run `main.py` to start training and testing:

```
cd src/python
python main.py
```
The training and validation results are updated in the browser during training. An example is shown below: the 1st figure shows the training and validation loss, RMSE and SSIM curves, and the 2nd and 3rd montage figures show the training and validation images, respectively. In each montage, the 1st row contains the camera-captured uncompensated images, the 2nd row contains the model-predicted projector input images, and the 3rd row contains the ground-truth projector input images.
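If you want to log extra curves of your own, a minimal visdom sketch looks like the following (hypothetical window name and loss values; the repo's actual plotting code may differ):

```python
import numpy as np
import visdom

vis = visdom.Visdom()  # assumes the visdom server started above is at localhost:8097

# create a line-plot window, then append one point per iteration
win = vis.line(X=np.array([0]), Y=np.array([1.0]), opts=dict(title='my_loss'))
for it in range(1, 11):
    loss = 1.0 / it  # placeholder loss value
    vis.line(X=np.array([it]), Y=np.array([loss]), win=win, update='append')
```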
To apply CompenNet to your own setup:

1. Project and capture the images in `CompenNet/data/train` and `CompenNet/data/test`. Note that in our setup, we stretched the image to cover the entire projection area (keeping the aspect ratio).
2. Project and capture a plain gray image to obtain the camera-captured surface image `CompenNet/data/ref/img_gray`.
3. Estimate a homography `H` between the camera and projector images, then warp the camera-captured `train`, `test` and `img_gray` images to the projector's view using `H` (see the sketch after this list).
4. Save the warped images to `CompenNet/data/light[n]/pos[m]/[surface]/warp/train`, `CompenNet/data/light[n]/pos[m]/[surface]/warp/test` and `CompenNet/data/light[n]/pos[m]/[surface]/warp/ref`, respectively, where `[n]` and `[m]` are the lighting setup index and position setup index, and `[surface]` is the projection surface's name. Note that `ref/img_0001.png` to `ref/img_0125.png` are plain color training images used by the TPS-based method; you don't need these images to train CompenNet.
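Below is a minimal sketch of steps 3-4 using OpenCV, assuming you already have matched camera-projector point pairs; the point values, file paths and projector resolution are hypothetical.

```python
import cv2 as cv
import numpy as np

# matched 2D points: cam_pts[i] in the camera image corresponds to prj_pts[i]
# in the projector image (in practice, use many correspondences, e.g., from a
# projected checkerboard, and a robust method such as RANSAC)
cam_pts = np.array([[102, 118], [521, 96], [543, 402], [88, 417]], dtype=np.float32)
prj_pts = np.array([[0, 0], [800, 0], [800, 600], [0, 600]], dtype=np.float32)

# estimate the camera-to-projector homography H
H, _ = cv.findHomography(cam_pts, prj_pts)

# warp one camera-captured training image to the projector's view
cam_im = cv.imread('cam_captured_train/img_0001.png')
warped = cv.warpPerspective(cam_im, H, (800, 600))  # (width, height) = projector resolution
cv.imwrite('data/light1/pos1/mysurface/warp/train/img_0001.png', warped)
```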
To compare your own model with CompenNet:

1. Create a new model class `NewModel` in `CompenNetModel.py` (a minimal sketch is shown after this list).
2. Add `'NewModel'` to `model_list`, e.g., `model_list = ['NewModel', 'CompenNet']` in `main.py`.
3. Run `main.py`.
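For illustration, here is a minimal sketch of what such a class could look like. The interface is an assumption; match the constructor and `forward` signature of the existing models in `CompenNetModel.py`.

```python
import torch
import torch.nn as nn

class NewModel(nn.Module):
    # toy compensation model: predicts the projector input image from the
    # camera-captured image and the camera-captured surface image
    def __init__(self):
        super(NewModel, self).__init__()
        self.name = self.__class__.__name__
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, x, s):
        # x: camera-captured image, s: surface image, both N x 3 x H x W in [0, 1]
        return self.net(torch.cat([x, s], dim=1))
```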
The evaluation results of `NewModel` and `CompenNet` will be saved to `log/%Y-%m-%d_%H_%M_%S.txt`; an example is shown below. The prefix `uncmp_` means the similarity between the uncompensated camera-captured images and the ground-truth projector input images, and the prefix `valid_` means the similarity between the model-predicted projector input images and the ground-truth projector input images. Refer to Fig. 3 of our CVPR'19 paper for more details.

```
data_name             model_name  loss_function  num_train  batch_size  max_iters  uncmp_psnr  uncmp_rmse  uncmp_ssim  valid_psnr  valid_rmse  valid_ssim
light2/pos1/curves    NewModel    l1+ssim        500        64          1000       14.3722     0.3311      0.5693      23.2068     0.1197      0.7974
light2/pos1/curves    CompenNet   l1+ssim        500        64          1000       14.3722     0.3311      0.5693      23.1205     0.1209      0.7943
light2/pos1/squares   NewModel    l1+ssim        500        64          1000       10.7664     0.5015      0.5137      20.1589     0.1701      0.7032
light2/pos1/squares   CompenNet   l1+ssim        500        64          1000       10.7664     0.5015      0.5137      20.1673     0.1699      0.7045
light1/pos1/stripes   NewModel    l1+ssim        500        64          1000       15.1421     0.3030      0.6264      24.9872     0.0975      0.8519
light1/pos1/stripes   CompenNet   l1+ssim        500        64          1000       15.1421     0.3030      0.6264      24.9245     0.0983      0.8508
light2/pos2/lavender  NewModel    l1+ssim        500        64          1000       13.1573     0.3808      0.5665      22.2718     0.1333      0.7723
light2/pos2/lavender  CompenNet   l1+ssim        500        64          1000       13.1573     0.3808      0.5665      22.1861     0.1347      0.7693
```
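For reference, the PSNR and RMSE columns can be reproduced from image pairs roughly as follows. This is a sketch assuming images normalized to [0, 1]; the repo's exact metric code may differ, e.g., in how the RMSE is normalized over color channels.

```python
import torch

def rmse(x, y):
    # root-mean-square error over all pixels and channels
    return torch.sqrt(torch.mean((x - y) ** 2)).item()

def psnr(x, y):
    # peak signal-to-noise ratio for images in [0, 1] (peak value 1.0)
    return (10 * torch.log10(1.0 / torch.mean((x - y) ** 2))).item()

cam_uncmp = torch.rand(3, 256, 256)  # placeholder camera-captured image
prj_gt = torch.rand(3, 256, 256)     # placeholder ground-truth projector input
print(f'uncmp_psnr = {psnr(cam_uncmp, prj_gt):.4f}, uncmp_rmse = {rmse(cam_uncmp, prj_gt):.4f}')
```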
If you use this code or the dataset in your research, please cite our paper:

```
@inproceedings{huang2019compennet,
    author    = {Huang, Bingyao and Ling, Haibin},
    title     = {End-To-End Projector Photometric Compensation},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2019}
}
```
The PyTorch implementation of SSIM loss is modified from Po-Hsun-Su/pytorch-ssim. We thank the anonymous reviewers for valuable and inspiring comments and suggestions. We thank the authors of the colorful textured sampling images.
This software is freely available for non-profit, non-commercial use, and may be redistributed under the conditions in the license.