bes-dev / pytorch_clip_guided_loss

A simple library that implements CLIP guided loss in PyTorch.
https://pypi.org/project/pytorch-clip-guided-loss/
Apache License 2.0

pytorch_clip_guided_loss: PyTorch implementation of the CLIP guided loss for Text-To-Image, Image-To-Image, or Image-To-Text generation.

Install package

pip install pytorch_clip_guided_loss

Install the latest version

pip install --upgrade git+https://github.com/bes-dev/pytorch_clip_guided_loss.git

Usage

Simple code

import torch
from pytorch_clip_guided_loss import get_clip_guided_loss

loss_fn = get_clip_guided_loss(clip_type="ruclip", input_range=(-1, 1)).eval().requires_grad_(False)
# text prompt
loss_fn.add_prompt(text="text description of what we would like to generate")
# image prompt
loss_fn.add_prompt(image=torch.randn(1, 3, 224, 224))

# image variable to optimize
var = torch.randn(1, 3, 224, 224).requires_grad_(True)
loss = loss_fn.image_loss(image=var)["loss"]
loss.backward()
print(var.grad)
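Under the hood, a CLIP guided loss of this kind typically measures how far the CLIP embedding of the candidate image is from the embeddings of the registered prompts, commonly via cosine similarity. The sketch below illustrates that idea in plain Python over toy embedding vectors; `cosine_loss` is a hypothetical helper, not part of the library's API, and the real loss operates on torch tensors produced by the CLIP encoders.

```python
import math

def cosine_loss(a, b):
    # toy CLIP-style guided loss: 1 minus the cosine similarity between an
    # "image embedding" a and a "prompt embedding" b (lower = better match)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# identical embeddings give zero loss; orthogonal embeddings give a loss of 1
print(cosine_loss([1.0, 0.0], [1.0, 0.0]))  # 0.0
print(cosine_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Minimizing such a loss by gradient descent on the image (or on a generator's latent code) is what pulls the output toward the prompt.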

VQGAN-CLIP

We provide a tiny implementation of the VQGAN-CLIP pipeline for image generation as an example of how to use our library. To get started with our VQGAN-CLIP implementation, please follow the documentation.
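The core of a VQGAN-CLIP pipeline is a simple optimization loop: decode the latent code with VQGAN, score the decoded image with the CLIP guided loss, and step the latent against the gradient. The toy below shows only that control flow in plain Python; the names `optimize` and `grad_fn` are illustrative stand-ins, and the real pipeline uses torch autograd and an optimizer such as Adam rather than hand-written gradients.

```python
def optimize(z, grad_fn, lr=0.1, steps=200):
    # skeleton of the VQGAN-CLIP loop: repeatedly nudge the latent code z
    # against the gradient of the guided loss of the decoded image
    for _ in range(steps):
        z = [zi - lr * gi for zi, gi in zip(z, grad_fn(z))]
    return z

# toy stand-in for autograd: a quadratic loss pulling z toward a "target"
# latent, whose gradient is 2 * (z - target)
target = [1.0, -2.0, 0.5]
grad = lambda z: [2.0 * (zi - ti) for zi, ti in zip(z, target)]
z = optimize([0.0, 0.0, 0.0], grad)
print([round(zi, 3) for zi in z])  # [1.0, -2.0, 0.5]
```

In the real pipeline `grad_fn` would be replaced by calling `loss.backward()` on `loss_fn.image_loss(image=vqgan.decode(z))["loss"]` and reading the latent's `.grad`.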

Zero-shot Object Detection

We provide a tiny implementation of a zero-shot object detector based on Selective Search region proposals and CLIP guided loss. To get started with our ClipRCNN implementation, please follow the documentation.
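The detection scheme reduces to scoring region proposals: Selective Search produces candidate boxes, each crop is scored by the CLIP guided loss against the text prompt, and the lowest-loss box is the detection. The sketch below shows that selection step in plain Python; `detect` and `fake_loss` are hypothetical names, and the real ClipRCNN crops the image and runs the actual CLIP loss per proposal.

```python
def detect(proposals, clip_loss):
    # zero-shot detection sketch: pick the proposal box whose crop best
    # matches the prompt, i.e. the one with the smallest guided loss
    return min(proposals, key=clip_loss)

# toy proposals as (x1, y1, x2, y2) boxes, scored by a fake loss that
# stands in for cropping + CLIP scoring
boxes = [(0, 0, 50, 50), (10, 10, 90, 90), (40, 40, 60, 60)]
fake_loss = lambda box: abs(box[0] - 10)  # pretend boxes near x1=10 match best
print(detect(boxes, fake_loss))  # (10, 10, 90, 90)
```

Because the prompt is free-form text, no detector retraining is needed: changing the prompt changes what the same loop detects, which is what makes the approach zero-shot.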