This demo showcases the Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server. You can read more about LCM + LoRAs with Diffusers here.
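Under the hood the server drives a standard Diffusers LCM pipeline frame by frame. As a rough standalone illustration of that call (the model ID, prompt, and settings below are examples, not the demo's exact configuration):

```python
# Minimal standalone LCM example with Diffusers; illustrative only,
# not the demo's actual server code.
import torch
from diffusers import DiffusionPipeline

# This checkpoint loads as a latent consistency model pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# LCM produces usable images in as few as 4 inference steps.
image = pipe(
    prompt="portrait photo of a person, studio lighting",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("lcm_sample.png")
```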
You need a webcam to run this demo. 🤗
See a collection of live demos here
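The generated frames are pushed to the browser as an MJPEG stream. A generic sketch of how such an endpoint can be served from Python with FastAPI (illustrative only; the demo's actual server lives in server/main.py):

```python
# Generic MJPEG streaming endpoint sketch (not the demo's actual code).
# Run with: uvicorn mjpeg_sketch:app
import io
import time

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()


def frame_generator():
    # Yield JPEG frames in a multipart/x-mixed-replace body, which
    # browsers render as a continuously updating image (MJPEG).
    while True:
        # Placeholder frame; real frames come from the diffusion pipeline.
        frame = Image.new("RGB", (512, 512), "black")
        buf = io.BytesIO()
        frame.save(buf, format="JPEG")
        yield (
            b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + buf.getvalue() + b"\r\n"
        )
        time.sleep(1 / 30)  # cap at roughly 30 fps


@app.get("/stream")
def stream():
    return StreamingResponse(
        frame_generator(),
        media_type="multipart/x-mixed-replace; boundary=frame",
    )
```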
You need Python 3.10 and Node > 19, plus either CUDA (an NVIDIA GPU), a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.
python -m venv venv
source venv/bin/activate
pip3 install -r server/requirements.txt
cd frontend && npm install && npm run build && cd ..
python server/main.py --reload --pipeline img2imgSDTurbo
Don't forget to build the frontend!
cd frontend && npm install && npm run build && cd ..
You can build your own pipeline by following the examples here; a rough sketch of the general shape is included after the commands below.
python server/main.py --reload --pipeline img2img
python server/main.py --reload --pipeline txt2img
python server/main.py --reload --pipeline controlnet
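As a purely hypothetical sketch of what a pipeline module might look like (the class layout, the Info/InputParams models, and the predict() signature are assumptions for illustration; copy the real interface from the linked examples):

```python
# Hypothetical pipeline module sketch; the actual interface should be
# taken from the example pipelines in this repository.
import torch
from PIL import Image
from pydantic import BaseModel
from diffusers import AutoPipelineForText2Image


class Pipeline:
    class Info(BaseModel):
        # Assumed metadata fields; check the example pipelines for the real ones.
        name: str = "my-custom-pipeline"
        title: str = "My Custom Pipeline"

    class InputParams(BaseModel):
        # Parameters exposed to the frontend controls (assumed).
        prompt: str = "a photo of a cat"
        steps: int = 2
        guidance_scale: float = 0.0

    def __init__(self, device: torch.device, torch_dtype: torch.dtype):
        # Any diffusers pipeline can back the demo; SD-Turbo is used as an example.
        self.pipe = AutoPipelineForText2Image.from_pretrained(
            "stabilityai/sd-turbo", torch_dtype=torch_dtype
        ).to(device)

    def predict(self, params: "Pipeline.InputParams") -> Image.Image:
        result = self.pipe(
            prompt=params.prompt,
            num_inference_steps=params.steps,
            guidance_scale=params.guidance_scale,
        )
        return result.images[0]
```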
Using LCM-LoRA gives it the superpower of doing inference in as little as 4 steps. Learn more here or in the technical report. A minimal Diffusers sketch of the LCM-LoRA recipe follows the commands below.
python server/main.py --reload --pipeline controlnetLoraSD15
or SDXL; note that SDXL is slower than SD15 since inference runs on 1024x1024 images
python server/main.py --reload --pipeline controlnetLoraSDXL
python server/main.py --reload --pipeline txt2imgLora
python server/main.py --reload --pipeline txt2imgLoraSDXL
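For reference, the LCM-LoRA recipe in Diffusers boils down to swapping in the LCM scheduler and loading the LoRA weights on top of a regular Stable Diffusion checkpoint. A minimal standalone sketch (checkpoint, prompt, and settings are illustrative):

```python
# Minimal LCM-LoRA sketch with Diffusers; checkpoint and settings are illustrative.
import torch
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-7", torch_dtype=torch.float16
).to("cuda")

# Swap in the LCM scheduler and load the LCM-LoRA weights.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

# With LCM-LoRA, ~4 steps and a low guidance scale are usually enough.
image = pipe(
    prompt="a watercolor painting of a lighthouse",
    num_inference_steps=4,
    guidance_scale=1.0,
).images[0]
image.save("lcm_lora_sample.png")
```

The available --pipeline values are: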
img2img
txt2img
controlnet
txt2imgLora
controlnetLoraSD15
controlnetLoraSDXL
txt2imgLoraSDXL
img2imgSDXLTurbo
controlnetSDXLTurbo
img2imgSDTurbo
controlnetSDTurbo
controlnetSegmindVegaRT
img2imgSegmindVegaRT
The server accepts the following flags:

--host: Host address (default: 0.0.0.0)
--port: Port number (default: 7860)
--reload: Reload code on change
--max-queue-size: Maximum queue size (optional)
--timeout: Timeout period (optional)
--safety-checker: Enable Safety Checker (optional)
--torch-compile: Use Torch Compile
--use-taesd / --no-taesd: Use Tiny Autoencoder
--pipeline: Pipeline to use (default: "txt2img")
--ssl-certfile: SSL Certificate File (optional)
--ssl-keyfile: SSL Key File (optional)
--debug: Print inference time
--compel: Compel option
--sfast: Enable Stable Fast
--onediff: Enable OneDiff

If you run using bash build-run.sh, you can set the PIPELINE variable to choose the pipeline you want to run:
PIPELINE=txt2imgLoraSDXL bash build-run.sh
or by setting environment variables directly:
TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL
If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS, or you can follow the instructions in my comment
openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
You need the NVIDIA Container Toolkit for Docker; the pipeline defaults to `controlnet`
docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
To reuse model data from the host and avoid downloading it again, mount ~/.cache/huggingface into the container. You can change ~/.cache/huggingface
to any other directory, but if you use huggingface-cli locally, sharing the same cache avoids re-downloading models.
docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data --gpus all lcm-live
or with environment variables
docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live