nomeata opened 2 years ago
Maybe I didn't understand the question, but isn't the --share argument useful here? It gives you a public URL that you can access from your laptop while the processing happens elsewhere. And with --gradio-auth you can also protect it with a user:password.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Run-with-Custom-Parameters
That’s close: --share helps if I have a server with a GPU that I can manage (e.g. runpod or similar). Maybe that’s good enough.
But it’d be even less hassle (and also more efficient on resources) if I didn’t need a full persistent server (or virtual slice) at all, but could simply use a stateless API like replicate.com or https://stablehorde.net/. The UI would run locally on my laptop (easy access to the generated files!), and it would just invoke the API to do the actual image processing.
Does that make sense?
I think I understand what you mean: you want a local GUI with a remote GPU served behind an API with a token.
It could technically be done with the share feature if it had an API that let you set all values and parameters through endpoints. Sadly there are no exposed APIs; the only exposed thing is the Gradio interface. But yes, technically it could be done.
And while you are at it, you could tell the local interface that you have a couple of computers and split the work across two or more. :)
sadly there are no exposed apis, the only exposed thing is the gradio interface
I wouldn't expect stable-diffusion-webui itself to offer that API, but something generic and frontend-independent, like https://replicate.com/stability-ai/stable-diffusion (see https://replicate.com/docs/reference/http for API docs), that “just” runs the model, with similar endpoints for other models (e.g. the upscaling models). (I think Hugging Face has a similar serverless model-processing feature.)
But maybe what you are saying is that stable-diffusion-webui needs more control over the model processing than these serverless APIs provide?
In that case, maybe a cog repo exposing that could be created and uploaded to replicate.com, exposing all the needed knobs.
Indeed splitting the work would then be trivial; these serverless APIs surely have no trouble running parallel requests (as long as you pay for them :-))
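For reference, the Replicate HTTP API mentioned above works by POSTing a JSON prediction request. Here is a minimal sketch of what a local frontend would send; the model version hash and API token are placeholders, and the actual network call (via the `requests` library) is left commented out.

```python
# Sketch of a client for Replicate's HTTP prediction API
# (https://replicate.com/docs/reference/http).
# "<model-version-hash>" is a placeholder taken from the model's page.
import json

def build_prediction_request(prompt, version, width=512, height=512):
    """Build the JSON body for POST https://api.replicate.com/v1/predictions."""
    return {
        "version": version,                  # model version hash
        "input": {"prompt": prompt, "width": width, "height": height},
    }

body = build_prediction_request("a watercolor fox", version="<model-version-hash>")
payload = json.dumps(body)

# With the `requests` library, the actual call would look roughly like:
# requests.post("https://api.replicate.com/v1/predictions",
#               headers={"Authorization": "Token $REPLICATE_API_TOKEN"},
#               data=payload)
```

The response to that POST contains a prediction id that the client polls until the output image URLs are ready, which is what makes the backend fully stateless from the UI's point of view.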
@nomeata did you manage to make any progress on this? Also curious.
I second this suggestion
Having an API would be great for allowing chat bots to use this system for example (e.g. Discord bots)
Sure, --share exists and works well enough for manual usage, but it would be great to have an API endpoint for when you want to generate something from another program.
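(The webui did later gain a built-in HTTP API: launching with the --api flag serves endpoints such as /sdapi/v1/txt2img.) As a rough sketch of what such programmatic access looks like: a bot POSTs a JSON payload and gets back base64-encoded PNGs in an "images" list. The response object below is a fabricated stand-in purely to illustrate the decoding step.

```python
# Sketch of programmatic generation against the webui's own API
# (available when the server is started with --api).
import base64

def txt2img_payload(prompt, steps=20, width=512, height=512):
    """Minimal request body for POST /sdapi/v1/txt2img."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

payload = txt2img_payload("a lighthouse at dusk")
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)

# Decoding a returned image; this response is a fabricated stand-in,
# shaped like the real one ("images" holds base64-encoded PNGs):
fake_response = {"images": [base64.b64encode(b"\x89PNG...").decode()]}
png_bytes = base64.b64decode(fake_response["images"][0])
```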
No progress, no. But also not so pressing, using colab worked fine for me :-)
@4n0nh4x0r: This is about the WebUI using an API for the render backend, not WebUI exposing an API for bots etc, though.
Oh, that makes sense actually, but it would technically fit into the same idea, wouldn't it? The rendering system would expose an API for requests, and the webui would contact that API. This could technically also be done locally, so that the webui and the rendering system are on the same machine.
If you’d like to use the same API backend from your scripts that I want the WebUI to use, you can just use, for example, https://replicate.com/stability-ai/stable-diffusion now, can’t you?
I fail to see how that publicly hosted AI is the same as this privately hosted one, where I can use whatever model I want, with no filters and full customisability.
There is an issue talking about integrating StableHorde into this UI:
As well as another that I just opened with regard to exploring integrations with Project nataili and the StableCabal backend APIs:
I also came across a few other distributed training/generation things that I linked to on this issue:
Train a Stable Diffusion model over the internet with Hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
I also noticed that one of the core things deepspeed supports is being able to distribute a workload across multiple GPUs (including remote GPUs). Here's a tutorial I found on it:
Training On Multiple Nodes With DeepSpeed
So I wonder if that might also be one possible way of doing more distributed generation?
This is old, and there are likely better solutions, but I solved this using an SSH tunnel.
For this to work you need to install an OpenSSH server on the machine hosting the automatic1111 webui, set up key-based SSH login, and use an SSH client that allows creating tunnels; I use Bitvise.
I now simply open an SSH session, start webui-user.bat, and access the same URL as on the host machine, i.e. http://127.0.0.1:7860, because I set up the tunnel to map exactly to what the host is using. You can do whatever you like as long as the server side uses localhost:7860.
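The OpenSSH command-line equivalent of this Bitvise setup is a local port forward. The host address and username below are placeholders for the machine running the webui.

```shell
HOST=192.168.1.50   # placeholder: address of the machine running the webui
USER=sdserver       # placeholder: your login on that machine

# Forward local port 7860 to port 7860 on the webui host. -N means
# "no remote command", so the session exists only to carry the tunnel.
CMD="ssh -N -L 7860:localhost:7860 $USER@$HOST"
echo "$CMD"
```

While the tunnel is up, opening http://127.0.0.1:7860 on the laptop reaches the webui on the host, exactly as described above.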
@vmajor thanks for the heads-up about the ssh-tunnel, great idea.
Just in case it helps anyone, this is what I'm doing: https://github.com/adriangalilea/LambdaTunnel/tree/main
This is old, and there are likely better solutions, but I solved this using an SSH tunnel.
For this to work you need to install an OpenSSH server on the machine hosting the automatic1111 webui, set up key-based SSH login, and use an SSH client that allows creating tunnels; I use Bitvise.
I now simply open an SSH session, start webui-user.bat, and access the same URL as on the host machine, i.e. http://127.0.0.1:7860, because I set up the tunnel to map exactly to what the host is using. You can do whatever you like as long as the server side uses localhost:7860.
Hi, I'm trying to get this to work. Can you help me out?
I've installed and set up the Bitvise client and server on the computers with shared keys. The tunneling works fine, but I can't use http://127.0.0.1:7860/ on the client to access the webui. What I noticed is: when I use port 7860 for SSH, either SSH or the webui (when forced to port 7860) won't start because the port is "already used". What do you mean by "setting up the tunnel to map exactly what the host is using"? Would really appreciate an answer to this :)
The project I shared previously does most of this automatically; it is a bit confusing, as it is mostly for my own convenience. You can skim through the script in it if you wish.
You need to log in over SSH normally and start the automatic1111 webui there, ideally inside a tmux session, so you can close the SSH session and the webui keeps running.
Then you need to run:
ssh -t -N -L 7860:localhost:7860 ubuntu@$server_ip
This opens a tunnel from local port 7860 to port 7860 on the remote host.
Thank you for your reply. I skimmed through your guide before; the problem is I only understand part of it. I'm quite inexperienced here. I'm not trying to connect to a remote GPU via Lambda, but rather to use my home PC (Win10), over the local network or remotely over my own VPN, while having the webui on my laptop. Do you know how I can set up Bitvise SSH correctly? (It's basically working already, but I assume I'm missing the step of setting up the correct tunneling for the webui.)
"ssh -t -N -L 7860:localhost:7860 ubuntu@$server_ip" Where do I run this, and what does Ubuntu have to do with it? :)
The webui will be running on port 7860 of the machine you run it from. You want to create a tunnel from that machine to the one you will be using it from.
I have zero experience using Bitvise; I always use the terminal for SSH.
In that command, ubuntu refers to the username, which matters for my particular connection login.
Remember, you need to create a tunnel between the remote port and the local port, 7860.
First make sure you can open a normal SSH connection, then try to set up the tunnel.
create a tunnel from the remote port to the local port
Thank you for your help. It was a bit of trial and error, but I got it to work by modifying the C2S tab in Bitvise.
If your problem is local GPU limitations and you want to have greater control of the environment, here are two options:
@nomeata Do you still need a replicate.com API? I have created a txt2img API; you can check it out: https://replicate.com/llsean/cog-a1111-webui
https://github.com/mudler/LocalAI/discussions/1516 There is also demand for LocalAI compatibility
I'd like to play around with this UI running locally, but my laptop doesn’t have a GPU. But there are public APIs to run models (e.g. https://replicate.com/stability-ai/stable-diffusion), and I’d be willing to pay for the resources I use.
I can use the colab notebook, and that’s great, but it feels like a clunky work-around.
So would it be feasible to run stable-diffusion-webui locally, give it my replicate.com (or similar service) API key, and start using it?