# stable-diffusion-multi-user

A stable diffusion multi-user server API deployment that supports autoscaling and the webui extension API...

## Features

You can build your own UI, community features, account login & payment, etc. on top of these functions!
The project can be roughly divided into two parts: the django server code, and the stable-diffusion-webui code that we use to initialize and run the models. I'll mainly explain the django server part here.
## Project structure

In the main project directory:

- `modules/`: stable-diffusion-webui modules
- `models/`: stable diffusion models
- `sd_multi/`: the django project name
  - `urls.py`: server API path configuration
- `simple/`: the main django code
  - `views.py`: main API processing logic
  - `lb_views.py`: load-balancing API
- `requirements.txt`: stable diffusion pip requirements
- `setup.sh`: run it with options to set up the server environment
- `gen_http_conf.py`: called in `setup.sh` to set up the apache configuration

## Deploy on a GPU server

1. `cd` to the project directory (the one that contains `manage.py`)
2. Run `sudo bash setup.sh` with options (check `setup.sh` for the available options; recommended order: follow the file order: `env`, `venv`, `sd_model`, `apache`)
3. If needed, modify the ports in `/etc/apache2/ports.conf` and `/etc/apache2/sites-available/sd_multi.conf`
4. Run `sudo service apache2 restart`
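Run end to end, the steps above look like the sketch below. The comment on each option is an assumption about what that stage does; check `setup.sh` itself for the authoritative behavior:

```shell
cd stable-diffusion-multi-user           # the directory that contains manage.py
sudo bash setup.sh env                   # assumed: install system-level dependencies
sudo bash setup.sh venv                  # assumed: create the python venv + pip requirements
sudo bash setup.sh sd_model              # assumed: fetch/prepare the stable diffusion models
sudo bash setup.sh apache                # assumed: generate the apache config (gen_http_conf.py)
sudo service apache2 restart
```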
## APIs

- `/`: view the homepage; used to test that apache is configured successfully
- `/txt2img_v2/`: txt2img with the same parameters as sd-webui; also supports extension parameters (such as controlnet)
- `/img2img_v2/`: img2img with the same parameters as sd-webui; also supports extension parameters (such as controlnet)

For the older APIs, see `old_django_api.md`.
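As an illustration, a minimal client call to `/txt2img_v2/` could be sketched as below. The payload field names mirror the sd-webui txt2img API and are assumptions; check `simple/views.py` for the exact fields the deployed server accepts.

```python
import json

# Build a txt2img request body. Field names below mirror the sd-webui API
# (prompt, steps, width, height, cfg_scale, ...) and are assumptions --
# verify against simple/views.py before relying on them.
def build_txt2img_payload(prompt, **overrides):
    payload = {
        "prompt": prompt,
        "negative_prompt": "",
        "steps": 20,
        "width": 512,
        "height": 512,
        "cfg_scale": 7,
    }
    payload.update(overrides)
    return payload

body = json.dumps(build_txt2img_payload("a photo of a cat", steps=30)).encode("utf-8")

# POST it to your deployed server (replace <server-ip>:<port> with your own):
# from urllib import request
# req = request.Request("http://<server-ip>:<port>/txt2img_v2/", data=body,
#                       headers={"Content-Type": "application/json"})
# print(request.urlopen(req).read())
```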
## Deploy the load-balancing server

1. `cd` to the project directory (the one that contains `manage.py`)
2. Run `sudo bash setup.sh lb`
3. Run `mv sd_multi/urls.py sd_multi/urls1.py && mv sd_multi/urls_lb.py sd_multi/urls.py`
4. Modify the `ip_list` variable in `simple/lb_views.py` with your own servers' ip+port
5. Run `sudo service apache2 restart`
6. Call the `ip+port/multi_demo/` url path

If you don't want to deploy the load-balancing server but still want to test the functions, you can start the load-balancing server on your local computer:
1. Modify the `ip_list` variable in `simple/lb_views.py` with your own GPU servers' ip+port
2. `cd` to the project directory (the one that contains `manage.py`)
3. Run `mv sd_multi/urls.py sd_multi/urls1.py && mv sd_multi/urls_lb.py sd_multi/urls.py` (a rename)
4. Run `python manage.py runserver`
5. Visit the `/multi_demo/` path on the local server

Finally, you can call your HTTP API (test it using postman).
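To picture what the load balancer does with `ip_list`, here is a minimal round-robin sketch. The real dispatch logic lives in `simple/lb_views.py`; the list format and example addresses below are assumptions.

```python
from itertools import cycle

# Assumed shape of the ip_list variable in simple/lb_views.py: plain
# "ip:port" strings for each GPU server (example values, not real hosts).
ip_list = ["192.168.0.2:8000", "192.168.0.3:8000"]

_backends = cycle(ip_list)

def pick_server():
    # Round-robin dispatch: each call returns the next backend in turn.
    return next(_backends)
```

Each incoming request would then be proxied to the backend returned by `pick_server()`.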
## Deploy with docker

See sd-docker-slim for the deploy guide and a ready-to-use docker image.

## Deploy on replicate

A replicate demo is deployed here.
Deploy steps:
1. Run `git clone https://github.com/wolverinn/stable-diffusion-multi-user.git`
2. Run `cd stable-diffusion-multi-user/replicate-cog-slim/`
3. Modify `replicate-cog-slim/cog.yaml` to use your own replicate model
4. Modify the `predict()` function in `replicate-cog-slim/predicy.py` for custom API inputs & outputs
5. In `replicate-cog-slim/`, run `cog login`
6. Run `cog push`

Then you can see your model on replicate, and you can use it via the API or the replicate website.
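To show what customizing `predict()` means, here is a hypothetical sketch of a cog-style predictor. The names, parameters, and dict return value are illustrative assumptions, not the repo's actual code: a real cog predictor subclasses `cog.BasePredictor` and returns a `Path` to the generated image.

```python
# Hypothetical cog-style predictor (illustrative sketch, not the repo's code).
class Predictor:
    def setup(self):
        # A real setup() loads the stable diffusion weights once, so every
        # predict() call reuses the warm model.
        self.ready = True

    def predict(self, prompt, steps=20, width=512, height=512):
        # Stable diffusion expects image dimensions divisible by 8.
        if width % 8 or height % 8:
            raise ValueError("width and height must be multiples of 8")
        # Stand-in for running inference and writing an image to disk.
        return {"prompt": prompt, "steps": steps, "size": (width, height)}
```

Whatever parameters `predict()` declares become the API inputs of the pushed replicate model.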