PyTorch implementation of Contextual Loss (CX) and Contextual Bilateral Loss (CoBi).
For many image transformation tasks, spatially aligned data is hard to capture in the wild. Pixel-to-pixel or global loss functions cannot be directly applied to such unaligned data. CX is a loss function designed to overcome this problem. The key idea of CX is to interpret images as sets of feature points that have no spatial coordinates. If you want to know more about CX, please refer to the original paper, the repository, and the examples in the ./doc directory.
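For intuition, below is a minimal sketch of the contextual similarity computation as formulated in the paper: cosine distances between feature points, relative-distance normalization, a bandwidth-controlled softmax, and the negative log of the averaged best matches. The function and variable names are illustrative only; the package's internal implementation may differ in its details.

import torch

def contextual_similarity_sketch(x_feat, y_feat, band_width=0.5, eps=1e-5):
    # x_feat, y_feat: (N, C) and (M, C) sets of feature points without spatial coordinates
    mu_y = y_feat.mean(dim=0, keepdim=True)
    x_c = torch.nn.functional.normalize(x_feat - mu_y, dim=1)
    y_c = torch.nn.functional.normalize(y_feat - mu_y, dim=1)
    # pairwise cosine distance d[i, j] between x_i and y_j
    d = 1.0 - x_c @ y_c.t()
    # relative distance: compare each x_i against its closest y
    d_tilde = d / (d.min(dim=1, keepdim=True).values + eps)
    # bandwidth-controlled similarity, normalized per x_i
    w = torch.exp((1.0 - d_tilde) / band_width)
    cx_ij = w / w.sum(dim=1, keepdim=True)
    # for every target feature, take its best match, then average
    cx = cx_ij.max(dim=0).values.mean()
    return -torch.log(cx)

x = torch.rand(64, 512)  # e.g. 64 feature points with 512 channels
y = torch.rand(64, 512)
print(contextual_similarity_sketch(x, y))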
torch
torchvision
pip install git+https://github.com/S-aiueo32/contextual_loss_pytorch.git
You can use it like other PyTorch loss modules.
import torch
import contextual_loss as cl
import contextual_loss.functional as F
# input features
img1 = torch.rand(1, 3, 96, 96)
img2 = torch.rand(1, 3, 96, 96)
# contextual loss
criterion = cl.ContextualLoss()
loss = criterion(img1, img2)
# functional call
loss = F.contextual_loss(img1, img2, band_width=0.1, loss_type='cosine')
# comparing with VGG features
# if `use_vgg` is set, a VGG model is created inside the criterion
criterion = cl.ContextualLoss(use_vgg=True, vgg_layer='relu5_4')
loss = criterion(img1, img2)
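As a rough sketch of how the criterion fits into training, it can be dropped into an ordinary optimization loop and backpropagated like any other loss. The generator and optimizer below are hypothetical stand-ins, not part of this package.

# minimal training-loop sketch reusing the VGG-based criterion above;
# `generator` is a hypothetical stand-in model
generator = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)

output = generator(img1)        # unaligned prediction
loss = criterion(output, img2)  # contextual loss against the target
optimizer.zero_grad()
loss.backward()
optimizer.step()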
Thanks to the owners of the following awesome implementations.