MountaintopLotus / braintrust

A Dockerized platform for running Stable Diffusion on AWS (for now)
Apache License 2.0

Stable Diffusion on Docker on AWS #47

Closed. JohnTigue closed this issue 1 year ago.

JohnTigue commented 1 year ago

[Originally this was entitled "A1111 on Docker" but got renamed to "Stable Diffusion on Docker on AWS" once it was realized that there is a single repo which has Dockerized both A1111 and InvokeAI]

A Docker deploy is one specific way of implementing "Deploy A1111 on AWS" (#5). Docker sounds desirable, but this solution does NOT work on macOS, because there Docker has no access to the GPUs.

JohnTigue commented 1 year ago

Yup, wrong type of GPU on a g4ad.xlarge: AMD Radeon Pro V520 GPUs (not NVIDIA, so no CUDA)
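For anyone landing here: a quick way to see what card an instance actually has before fighting with images. (g4ad instances carry AMD Radeon Pro V520s, g4dn carry NVIDIA T4s; most Stable Diffusion Docker images assume CUDA, i.e. an NVIDIA card.)

```shell
# List GPU-like PCI devices on the instance; an AMD Radeon here means
# CUDA-based images won't work.
lspci | grep -iE 'vga|3d|display' || echo "no GPU-like device found"
```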

JohnTigue commented 1 year ago

This issue is now getting worked out in https://github.com/ManyHands/hypnowerk/issues/59#issuecomment-1384375355.

It is also partially in https://github.com/ManyHands/hypnowerk/issues/5#issuecomment-1384304157

JohnTigue commented 1 year ago

Let's regroup from the work in #5 and #59.

Then, hopefully just:

docker compose --profile download up --build
# wait until it's done, then:
docker compose --profile [ui] up --build
# where [ui] is one of: invoke | auto | auto-cpu | sygil | sygil-sl
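For context, those profile names come from the repo's docker-compose.yml. A hedged sketch of the shape (service and profile names per the AbdBarho README; the exact file contents may differ):

```yaml
# Sketch only: each service is gated behind a Compose profile, so
# `docker compose --profile <name> up` starts just that service.
services:
  download:
    profiles: ["download"]   # one-shot model downloader
  auto:
    profiles: ["auto"]       # AUTOMATIC1111 webui (GPU)
  auto-cpu:
    profiles: ["auto-cpu"]   # AUTOMATIC1111 webui (CPU-only fallback)
  invoke:
    profiles: ["invoke"]     # InvokeAI webui
```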
JohnTigue commented 1 year ago

In the meantime (while AWS is noodling on whether they want to rent me some more GPUs…) stop Take One (so I have 4 credits to play with) and spin up Take Two:

JohnTigue commented 1 year ago

It's up. SSH in, and we're back to a stale version of Docker:

$ git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git
$ cd stable-diffusion-webui-docker
$ docker compose --profile download up --build
unknown flag: --profile
JohnTigue commented 1 year ago

On Take Two:

$ docker --version
Docker version 20.10.22, build 3a2c30b

Yet Docker says 20.10.22 is the latest. So what gives?
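One likely explanation (an assumption, not confirmed in this thread): the Docker engine and Compose are separate components, so the engine can be current while Compose is old or missing. `--profile` needs Compose v2 (the `docker compose` plugin) or docker-compose >= 1.28. A quick way to check which Compose a box actually has:

```shell
# Engine version (what `docker --version` reports) says nothing about
# Compose; check the Compose plugin and the legacy binary separately.
if docker compose version >/dev/null 2>&1; then
  echo "compose v2 plugin present"
elif command -v docker-compose >/dev/null 2>&1; then
  docker-compose --version   # legacy v1 binary; profiles need >= 1.28
else
  echo "no compose found"
fi
```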

JohnTigue commented 1 year ago

A1111 seems to be running but needs network config for outside access:

webui-docker-auto-1  | Loading weights [cc6cb27103] from /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt
webui-docker-auto-1  | Applying xformers cross attention optimization.
webui-docker-auto-1  | Textual inversion embeddings loaded(0):
webui-docker-auto-1  | Model loaded in 25.3s (3.9s create model, 21.4s load weights).
webui-docker-auto-1  | Running on local URL:  http://0.0.0.0:7860
webui-docker-auto-1  |
webui-docker-auto-1  | To create a public link, set `share=True` in `launch()`.
JohnTigue commented 1 year ago

Invoke seems to go better, but now I think I'm dealing with opening an EC2 port, hopefully:

webui-docker-invoke-1  | >> Model loaded in 43.40s
webui-docker-invoke-1  | >> Max VRAM used to load the model: 2.17G
webui-docker-invoke-1  | >> Current VRAM usage:2.17G
webui-docker-invoke-1  | >> Current embedding manager terms: *
webui-docker-invoke-1  | >> Setting Sampler to k_lms
webui-docker-invoke-1  |
webui-docker-invoke-1  | * --web was specified, starting web server...
webui-docker-invoke-1  | >> Initialization file /stable-diffusion/invokeai.init found. Loading...
webui-docker-invoke-1  | * Initializing, be patient...
webui-docker-invoke-1  | >> Started Invoke AI Web Server!
webui-docker-invoke-1  | Point your browser at http://localhost:7860 or use the host's DNS name or IP address.
JohnTigue commented 1 year ago

Yeah, looks like I'm going to need to open port 7860:

[Screenshot: Screen Shot 2023-01-16 at 2 22 25 PM]
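The same inbound rule can be added from the AWS CLI instead of the console. The security group ID below is a placeholder; and note the open CIDR is fine for a throwaway experiment but should be tightened to your own IP for anything longer-lived:

```shell
# Placeholder: replace with the security group attached to the instance.
SG_ID="sg-0123456789abcdef0"

# Allow inbound TCP 7860 from anywhere (tighten the CIDR for real use).
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" \
  --protocol tcp --port 7860 \
  --cidr 0.0.0.0/0
```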
JohnTigue commented 1 year ago

Yup. Hello, InvokeAI: http://52.42.196.43:7860/

[Screenshot: Screen Shot 2023-01-16 at 2 26 15 PM]