A simple library that implements CLIP guided loss in PyTorch.
Install from PyPI:

```bash
pip install pytorch_clip_guided_loss
```

or install the latest version directly from GitHub:

```bash
pip install --upgrade git+https://github.com/bes-dev/pytorch_clip_guided_loss.git
```
```python
import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

# build the loss function (ruCLIP backbone, input images expected in the [-1, 1] range)
loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
# text prompt
loss_fn.add_prompt(text="text description of what we would like to generate")
# image prompt
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))
# variable to optimize
var = torch.randn(1, 3, 224, 224).requires_grad_(True)
loss = loss_fn.image_loss(image=var)["loss"]
loss.backward()
print(var.grad)
```
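For example, the loss can drive a simple pixel-space optimization loop. The sketch below only combines the calls shown above; the prompt text, step count, and learning rate are arbitrary illustrative choices, not recommendations from the library.

```python
import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

# build the loss and register a target text prompt
loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a red apple on a wooden table")

# optimize raw pixels directly with Adam (arbitrary hyper-parameters)
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    # clamp keeps the optimized pixels inside the expected [-1, 1] range
    loss = loss_fn.image_loss(image=image.clamp(-1, 1))["loss"]
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        print(f"step {step}: loss = {loss.item():.4f}")
```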
We provide a tiny implementation of the VQGAN-CLIP pipeline for image generation as an example of how to use the library. To get started with our VQGAN-CLIP implementation, please follow the documentation.
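The general idea behind such a pipeline is to optimize the latent code of a pretrained image generator, rather than raw pixels, with the CLIP guided loss as the objective. The sketch below is a minimal illustration of that pattern, not the library's VQGAN-CLIP code: the toy `decoder` stands in for a pretrained VQGAN decoder, and the latent shape and hyper-parameters are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a watercolor painting of a lighthouse")

# Stand-in for a pretrained VQGAN decoder: any differentiable module mapping a
# latent code to an image in [-1, 1] fits here; a toy upsampler keeps the
# sketch self-contained.
decoder = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
    nn.Tanh(),  # outputs in [-1, 1], matching input_range
).eval().requires_grad_(False)

latent = torch.randn(1, 64, 56, 56, requires_grad=True)  # assumed latent shape (decodes to 224x224)
optimizer = torch.optim.Adam([latent], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    image = decoder(latent)                       # decode latents to an image
    loss = loss_fn.image_loss(image=image)["loss"]
    loss.backward()
    optimizer.step()
```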
We also provide a tiny implementation of an object detector (ClipRCNN) based on Selective Search region proposals and the CLIP guided loss. To get started with our ClipRCNN implementation, please follow the documentation.
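The underlying idea can be sketched with the public loss API alone: score each region proposal by the CLIP guided loss of its crop and keep the best-matching box. The example below is not the library's ClipRCNN implementation; the hard-coded `proposals` stand in for boxes produced by Selective Search, and the random input image is a placeholder for a real photo.

```python
import torch
import torch.nn.functional as F
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
loss_fn.add_prompt(text="a cat")

# full image in [-1, 1]; in practice this would be a real photo
image = torch.rand(1, 3, 480, 640) * 2 - 1

# candidate boxes (x0, y0, x1, y1); in practice these would come from
# Selective Search or another region-proposal method
proposals = [(10, 20, 200, 240), (300, 100, 620, 460), (50, 50, 300, 300)]

scores = []
with torch.no_grad():
    for x0, y0, x1, y1 in proposals:
        crop = image[:, :, y0:y1, x0:x1]
        crop = F.interpolate(crop, size=(224, 224), mode="bilinear", align_corners=False)
        scores.append(loss_fn.image_loss(image=crop)["loss"].item())

# the proposal with the lowest CLIP guided loss matches the prompt best
best_box = proposals[min(range(len(proposals)), key=scores.__getitem__)]
print(best_box, min(scores))
```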