vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0
5.49k stars · 400 forks

[Feature]: support Client/Server mode #2153

Open kavaphis opened 1 year ago

kavaphis commented 1 year ago

Feature description

hello,

Since "Task Queue" and "API Mode" are both already supported by SD.Next:

Could a "Client/Server" feature be added on top of them?

That is to say: deploy the web UI on many different users' laptops (machines without dedicated graphics cards), and send all generation tasks to a pool of back-end servers.

This would be a great solution to the problem of resource sharing.

For example, our team now has nearly 100 people, and it is not feasible to equip everyone with an RTX 4090. Although we can start several web UIs on each server that has an RTX 4090, we cannot balance the load between the servers; we can only bind each user to a specific server.
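The setup described above could be sketched client-side: thin clients send generation requests through a small dispatcher that spreads them over a pool of GPU servers. Below is a minimal round-robin sketch, assuming each backend exposes the A1111-compatible `/sdapi/v1/txt2img` endpoint; the hostnames are hypothetical placeholders.

```python
# Minimal sketch (not an existing SD.Next feature): a client-side
# dispatcher that balances txt2img requests across several SD.Next
# API backends in round-robin order.
import itertools
import json
import urllib.request


class RoundRobinDispatcher:
    """Cycle through a fixed pool of SD.Next API base URLs."""

    def __init__(self, backends):
        self._pool = itertools.cycle(backends)

    def next_backend(self):
        # Return the next backend URL in the rotation.
        return next(self._pool)

    def txt2img(self, payload):
        # Send the generation request to the next backend in the pool.
        url = f"{self.next_backend()}/sdapi/v1/txt2img"
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())


if __name__ == "__main__":
    dispatcher = RoundRobinDispatcher([
        "http://gpu-server-1:7860",  # hypothetical hosts
        "http://gpu-server-2:7860",
    ])
    print(dispatcher.next_backend())  # http://gpu-server-1:7860
```

Real deployments would also need health checks and retries, but the core idea is just rotating requests over identical stateless API instances.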

Version Platform Description

No response

vladmandic commented 1 year ago

not an easy one, but definitely looking into it - cc @BinaryQuantumSoul

Keshawn commented 1 year ago

> not an easy one, but definitely looking into it - cc @BinaryQuantumSoul

The AWS China Solutions Team has built a solution based on SageMaker/Lambda and the original Stable Diffusion web UI. We are considering a more general solution. If possible, we are eager to collaborate with your team.

https://github.com/awslabs/stable-diffusion-aws-extension

BinaryQuantumSoul commented 1 year ago

Their solution is quite advanced; they also implement a remote-server architecture and model training.

What I'm aiming to build is an improved version of the StableHorde and OmniInfer integrations; the first is a bit outdated and the second works in A1111 but not in SD.Next. They both rely on a separate script below the txt2img options with its own model list/browser.

In my case it will be an SD.Next extension where the original extra networks browser is populated with remote options; I will try to integrate with the UI as much as possible so that the user experience matches local inference. It will also add support for the SD.Next API, so you will need to build your own architecture to distribute the API calls across different running SD.Next API instances.
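For the "build your own architecture to distribute the API calls" part, one simple policy (a sketch, not part of the planned extension) is to route each new job to the backend with the fewest in-flight requests. The bookkeeping below is illustrative: the counters are maintained by the caller rather than queried from the servers, and the backend names are placeholders.

```python
# Illustrative least-busy balancer for several running SD.Next API
# instances. The caller is expected to call release() when a job
# finishes; nothing here talks to an actual server.
class LeastBusyBalancer:
    def __init__(self, backends):
        # Track the number of jobs currently running on each backend.
        self.inflight = {b: 0 for b in backends}

    def acquire(self):
        # Pick the backend with the fewest in-flight jobs and claim a slot.
        backend = min(self.inflight, key=self.inflight.get)
        self.inflight[backend] += 1
        return backend

    def release(self, backend):
        # Free the slot so the backend becomes eligible again.
        self.inflight[backend] -= 1
```

Compared with plain round-robin, this keeps a slow or busy server from accumulating a queue while idle servers sit unused.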

Keshawn commented 1 year ago

> Their solution is quite advanced, they also implement remote server architecture and model training.
>
> What I'm aiming to build is an improved version of StableHorde integration and OmniInfer integration, the first one is a bit outdated and the second one works in A1111 but not SD.Next. They both rely on a separate script below the txt2img options with its own model list/browser.
>
> In my case it will be a SD.Next extension where the original extra networks browser is populated with remote options, I will try to integrate a maximum with the UI to have the same user experience than with local inference. It will also add support to use SD.Next API. So you will need to build your own architecture to distribute the api calls through different running SD.Next API.

Thank you very much; this will be a great help to our solution. In addition, could we communicate through email or social media?

vladmandic commented 1 year ago

Discord is the preferred means.

BinaryQuantumSoul commented 12 months ago

> If possible, we are eager to collaborate with your team.

@Keshawn Please join our Discord. Also, I'd need a running SD.Next API instance with a few models of each type (checkpoints, diffusers, LoRAs, LyCORIS, textual inversions, VAEs, ControlNets, upscalers, etc.) so that I can test and develop the extension.