GaParmar / img2img-turbo

One-step image-to-image with Stable Diffusion turbo: sketch2image, day2night, and more
MIT License

img2img-turbo

Paper | Sketch2Image Demo

Quick start: Running Locally | Gradio (locally hosted) | Training

Cat Sketching

Fish Sketching

We propose a general method for adapting a single-step diffusion model, such as SD-Turbo, to new tasks and domains through adversarial learning. This enables us to leverage the internal knowledge of pre-trained diffusion models while achieving efficient inference (e.g., for 512×512 images, 0.29 seconds on an A6000 and 0.11 seconds on an A100).
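As a rough illustration of how such single-step latencies are typically measured (the model below is a placeholder stand-in for timing purposes, not the actual adapted SD-Turbo network):

```python
import time
import torch

# Placeholder "generator": stands in for a one-step image-to-image network.
# The real model is an adapted SD-Turbo; this is only a timing-harness sketch.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
model.eval()

x = torch.randn(1, 3, 512, 512)  # one 512x512 input image

with torch.no_grad():
    model(x)  # warm-up pass so lazy initialization does not skew the timing
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    y = model(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"one forward pass: {elapsed:.4f} s, output shape {tuple(y.shape)}")
```

The `cuda.synchronize` calls matter on GPU: CUDA kernels launch asynchronously, so without them the timer would stop before the forward pass has actually finished.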

Our one-step conditional models CycleGAN-Turbo and pix2pix-turbo can perform various image-to-image translation tasks for both unpaired and paired settings. CycleGAN-Turbo outperforms existing GAN-based and diffusion-based methods, while pix2pix-turbo is on par with recent works such as ControlNet for Sketch2Photo and Edge2Image, but with one-step inference.

One-Step Image Translation with Text-to-Image Models
Gaurav Parmar, Taesung Park, Srinivasa Narasimhan, Jun-Yan Zhu
CMU and Adobe, arXiv 2403.12036


Results

Paired Translation with pix2pix-turbo

Edge to Image

Generating Diverse Outputs

By varying the input noise map, our method can generate diverse outputs from the same input conditioning. The output style can be controlled by changing the text prompt.
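Conceptually, the diversity comes from resampling the noise map while holding the conditioning fixed. A minimal sketch with a toy stand-in generator (the real model, its inputs, and its latent shapes differ):

```python
import torch

torch.manual_seed(0)

# Toy stand-in for a one-step generator: maps (conditioning, noise) -> image.
# In pix2pix-turbo the conditioning is the input sketch/edge map and the noise
# map is the latent fed to the single denoising step.
class ToyGenerator(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, cond, noise):
        return self.net(torch.cat([cond, noise], dim=1))

gen = ToyGenerator()
cond = torch.randn(1, 3, 64, 64)  # fixed input conditioning

outputs = []
for seed in (1, 2, 3):
    g = torch.Generator().manual_seed(seed)
    noise = torch.randn(1, 3, 64, 64, generator=g)  # varied noise map
    outputs.append(gen(cond, noise))

# Different noise maps give different outputs for the same conditioning.
print((outputs[0] - outputs[1]).abs().max().item())
```

Seeding each noise map separately, as above, also makes individual variations reproducible.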

Unpaired Translation with CycleGAN-Turbo

Day to Night

Night to Day

Clear to Rainy

Rainy to Clear


Method

Our Generator Architecture: We tightly integrate three separate modules in the original latent diffusion models into a single end-to-end network with small trainable weights. This architecture allows us to translate the input image x to the output y, while retaining the input scene structure. We use LoRA adapters in each module, introduce skip connections and Zero-Convs between input and output, and retrain the first layer of the U-Net. Blue boxes indicate trainable layers. Semi-transparent layers are frozen. The same generator can be used for various GAN objectives.
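A Zero-Conv can be sketched as a convolution whose weights and bias start at zero, so a newly added skip branch contributes nothing at initialization and its influence is grown by training. A minimal PyTorch sketch (layer sizes are illustrative, not the actual architecture):

```python
import torch
import torch.nn as nn

class ZeroConv2d(nn.Conv2d):
    """1x1 convolution initialized to zero: the branch it gates is a no-op
    at the start of training and is learned gradually."""
    def __init__(self, in_ch, out_ch):
        super().__init__(in_ch, out_ch, kernel_size=1)
        nn.init.zeros_(self.weight)
        nn.init.zeros_(self.bias)

# Skip connection from an encoder feature to the decoder, gated by a Zero-Conv.
encoder_feat = torch.randn(1, 64, 32, 32)
decoder_feat = torch.randn(1, 64, 32, 32)

zero_conv = ZeroConv2d(64, 64)
out = decoder_feat + zero_conv(encoder_feat)

# At initialization the gated branch is exactly zero, so out == decoder_feat.
print(torch.equal(out, decoder_feat))  # True
```

This initialization is what lets new connections be grafted onto a pre-trained network without disturbing its behavior on day one of fine-tuning.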

Getting Started

Environment Setup
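The exact setup steps live in the repository; as a hedged post-install sanity check (standard PyTorch calls only, nothing repository-specific), you can confirm that PyTorch is importable and whether a CUDA device is visible for fast one-step inference:

```python
import torch

# Environment sanity check: report the installed torch version and whether
# a CUDA device is available (required for the quoted A6000/A100 timings).
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```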

Unpaired Image Translation (CycleGAN-Turbo)

Gradio Demo

Training with your own data
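The repository's training scripts implement the full objectives; as a heavily simplified sketch of the adversarial loop the paper describes, with toy stand-ins for the generator and discriminator (real training operates on the adapted SD-Turbo with additional losses):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: G is the one-step generator, D a patch-style discriminator.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
D = nn.Conv2d(3, 1, kernel_size=4, stride=2, padding=1)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(4, 3, 32, 32)  # input-domain images
y = torch.randn(4, 3, 32, 32)  # target-domain images

for _ in range(2):
    # Discriminator step: real targets vs. (detached) generator outputs.
    fake = G(x).detach()
    real_logits, fake_logits = D(y), D(fake)
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: produce outputs the discriminator labels as real.
    fake_logits = D(G(x))
    g_loss = bce(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Detaching the generator output in the discriminator step is the standard trick that keeps the discriminator's gradients from flowing back into the generator.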

Acknowledgment

Our work uses Stable Diffusion Turbo (SD-Turbo) as the base model; see its LICENSE for the applicable terms.