
Stable-Diffusion-API-Server

This project is a RESTful API server that provides image generation and editing services based on Stable Diffusion models. The APIs are compatible with the OpenAI image generation and editing APIs.

[!NOTE] The project is still under active development: existing features are being improved and more features will be added in the future.

Quick Start

Setup
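As a minimal sketch, the server runs on the WasmEdge runtime together with its stable diffusion plugin. The installer invocation below is an assumption; check the WasmEdge documentation for the exact version and plugin options for your platform.

```bash
# Install the WasmEdge runtime (installer flow is an assumption; see the WasmEdge docs).
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install_v2.sh | bash
source $HOME/.wasmedge/env

# The server also needs WasmEdge's stable diffusion plugin
# (wasmedge_stablediffusion); consult the WasmEdge docs for how to install
# it for your OS and GPU backend.
```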

Run sd-api-server
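A sketch of a launch command, assembled from the CLI options listed below. The model name `sd-v1.5` and the file `sd-v1-5-pruned-emaonly-Q8_0.gguf` are placeholders for whichever Stable Diffusion GGUF model you have downloaded.

```bash
# Start the server on the default port 8080. `--dir .:.` preopens the current
# directory so the Wasm module can read the model file.
wasmedge --dir .:. sd-api-server.wasm \
  --model-name sd-v1.5 \
  --model sd-v1-5-pruned-emaonly-Q8_0.gguf
```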

Usage

Image Generation

Example prompt: "A cute baby sea otter"
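A sketch of a generation request, assuming an OpenAI-style `/v1/images/generations` route on the default port 8080 and the placeholder model name `sd-v1.5`:

```bash
curl -X POST http://localhost:8080/v1/images/generations \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "sd-v1.5",
        "prompt": "A cute baby sea otter"
      }'
```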

Image Editing

Example prompt: "A cute baby sea otter with blue eyes"
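A sketch of an edit request, assuming an OpenAI-style multipart `/v1/images/edits` route; `otter.png` is a hypothetical source image:

```bash
curl -X POST http://localhost:8080/v1/images/edits \
  -F image='@otter.png' \
  -F prompt='A cute baby sea otter with blue eyes' \
  -F model='sd-v1.5'
```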

Build
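A sketch of the build, assuming the standard Rust toolchain with the `wasm32-wasip1` target (matching the output path noted below):

```bash
# Add the WASI target and build the server to WebAssembly.
rustup target add wasm32-wasip1
cargo build --target wasm32-wasip1 --release
```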

If the build process is successful, sd-api-server.wasm will be generated in target/wasm32-wasip1/release/.

CLI Options

$ wasmedge target/wasm32-wasip1/release/sd-api-server.wasm -h

LlamaEdge-Stable-Diffusion API Server

Usage: sd-api-server.wasm [OPTIONS] --model-name <MODEL_NAME> <--model <MODEL>|--diffusion-model <DIFFUSION_MODEL>>

Options:
  -m, --model-name <MODEL_NAME>
          Sets the model name
      --model <MODEL>
          Path to full model [default: ]
      --diffusion-model <DIFFUSION_MODEL>
          Path to the standalone diffusion model file [default: ]
      --vae <VAE>
          Path to vae [default: ]
      --clip-l <CLIP_L>
          Path to the clip-l text encoder [default: ]
      --t5xxl <T5XXL>
          Path to the t5xxl text encoder [default: ]
      --lora-model-dir <LORA_MODEL_DIR>
          Path to the lora model directory
      --control-net <CONTROL_NET>
          Path to control net model
      --control-net-cpu
          Keep controlnet on cpu (for low vram)
      --threads <THREADS>
          Number of threads to use during computation. Default is -1, which means to use all available threads [default: -1]
      --clip-on-cpu
          Keep clip on cpu (for low vram)
      --vae-on-cpu
          Keep vae on cpu (for low vram)
      --task <TASK>
          Task type [default: full] [possible values: text2image, image2image, full]
      --socket-addr <SOCKET_ADDR>
          Socket address of LlamaEdge API Server instance. For example, `0.0.0.0:8080`
      --port <PORT>
          Port number [default: 8080]
  -h, --help
          Print help (see more with '--help')
  -V, --version
          Print version