Open kavaphis opened 1 year ago
not an easy one, but definitely looking into it - cc @BinaryQuantumSoul
The AWS China Solution Team has built a solution based on SageMaker/Lambda and the original Stable Diffusion WebUI. We are considering a more general solution. If possible, we are eager to collaborate with your team.
Their solution is quite advanced; they also implement a remote server architecture and model training.
What I'm aiming to build is an improved version of the StableHorde and OmniInfer integrations; the first is a bit outdated and the second works in A1111 but not in SD.Next. Both rely on a separate script below the txt2img options with its own model list/browser.
In my case it will be an SD.Next extension where the original extra networks browser is populated with remote options; I will try to integrate as closely as possible with the UI so that the user experience matches local inference. It will also add support for the SD.Next API, so you will need to build your own architecture to distribute the API calls across multiple running SD.Next API instances.
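To illustrate the "build your own architecture" part, here is a minimal sketch of one way to distribute calls across several SD.Next API instances: a naive round-robin dispatcher. The host names are hypothetical, and the `/sdapi/v1/txt2img` route is assumed based on SD.Next's A1111-compatible API.

```python
from itertools import cycle

class RoundRobinDispatcher:
    """Spreads generation requests evenly across a pool of SD.Next API instances."""

    def __init__(self, base_urls):
        self._urls = cycle(base_urls)

    def next_endpoint(self, route="/sdapi/v1/txt2img"):
        # Return the full URL the next request should be sent to.
        return next(self._urls) + route

# Hypothetical backend instances for illustration only.
dispatcher = RoundRobinDispatcher([
    "http://gpu-node-1:7860",
    "http://gpu-node-2:7860",
])
```

A real setup would also need health checks and retry logic; round-robin is just the simplest starting point.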
Thank you very much, this will be a great help to our solution. In addition, can we communicate through email or social media?
Discord is the preferred means.
@Keshawn Please join our Discord. Also, I'd need a running SD.Next API instance with a few models of each type (checkpoints, diffusers, loras, lycoris, textual inversion, vae, controlnet, upscale, etc.) so that I can test and develop the extension.
Feature description
Hello,
Since "Task Queue" and "API Mode" are both supported by SD.Next, could a "Client/Server" feature be added?
That is to say, deploy the Web UI on several users' laptops without dedicated graphics cards, and send all generation tasks to a group of back-end servers.
This would be a great solution to the problem of resource sharing.
For example, our team now has nearly 100 people, and we cannot equip everyone with an RTX 4090. Although we can start several Web UIs on each server with an RTX 4090, we cannot balance the load between the servers; we can only bind each user to a specific server.
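The load-balancing gap described above could be closed by a small broker in front of the GPU servers. This is a hedged sketch, not an existing SD.Next feature: a broker that assigns each generation job to the backend with the fewest in-flight jobs, rather than binding users to a fixed server. Server names are hypothetical; real dispatch would forward requests to each server's SD.Next API.

```python
class LeastLoadedBroker:
    """Assigns each job to the backend with the fewest in-flight jobs."""

    def __init__(self, servers):
        # Map of server name -> number of jobs currently in flight.
        self._load = {s: 0 for s in servers}

    def acquire(self):
        # Pick the least-loaded server and mark one job in flight on it.
        server = min(self._load, key=self._load.get)
        self._load[server] += 1
        return server

    def release(self, server):
        # Mark a job on `server` as finished.
        self._load[server] -= 1

# Hypothetical GPU backends for illustration only.
broker = LeastLoadedBroker(["rtx4090-a", "rtx4090-b"])
```

With such a broker, 100 users could share a handful of RTX 4090 servers without any fixed user-to-server binding.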
Version Platform Description
No response