NickLucche / stable-diffusion-nvidia-docker

GPU-ready Dockerfile to run the Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPU support.
MIT License
359 stars · 43 forks

problem with getting multiple GPUs to work #34

Open alpha754293 opened 6 months ago

alpha754293 commented 6 months ago

I followed your steps to get this installed and up and running, and it works with a single 3090 (single-GPU operation is the default).

However, when I try to add in my second 3090 with this command:

sudo docker run --name stable-diffusion --pull=always --gpus all -it -p 7860:7860 -e DEVICES=all nicklucche/stable-diffusion

This is the error that I get:

latest: Pulling from nicklucche/stable-diffusion
Digest: sha256:a7bbc5df2f879279513cfa26b51e0c42c1d8298944dc474e2500535ec23b5be4
Status: Image is up to date for nicklucche/stable-diffusion:latest
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Traceback (most recent call last):
  File "server.py", line 22, in <module>
    pipeline = init_pipeline()
  File "/app/main.py", line 56, in init_pipeline
    n_procs, devices, model_parallel_assignment=model_ass, **kwargs
  File "/app/parallel.py", line 168, in from_pretrained
    with open("./clip_config.pickle", "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: './clip_config.pickle'
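For what it's worth, the traceback points to a missing file rather than a GPU problem: the multi-GPU path in parallel.py unpickles a pre-generated ./clip_config.pickle from the container's working directory. A minimal sketch of that load pattern with a friendlier guard (the helper name and the error message are illustrative, not the repo's actual code):

```python
import os
import pickle

CONFIG_PATH = "./clip_config.pickle"

def load_clip_config(path=CONFIG_PATH):
    # Guard the unpickle so a missing file produces an actionable
    # message instead of a bare FileNotFoundError, as seen above.
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"{path} not found in the working directory; the multi-GPU "
            "code path expects this pickled CLIP config to already exist."
        )
    with open(path, "rb") as f:
        return pickle.load(f)
```

This only reproduces and clarifies the failure mode; it does not fix it — the underlying question is why the image never generated the pickle in multi-GPU mode.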

I also cloned your repo into my home directory (which automatically creates the stable-diffusion-nvidia-docker subdirectory).

Any help or guidance on getting multiple GPUs up and running would be greatly appreciated.

Thank you.

Hardware:
- Intel 6700K
- Asus Z170-E motherboard
- 64 GB DDR4-2400 unbuffered, non-ECC RAM
- 2x Gigabyte 3090

OS: Windows 10 22H2, with Ubuntu 22.04 LTS running via WSL2 (confirmed from PowerShell that it is running under WSL2). I was able to install the "vanilla" Automatic1111 (as a Docker container), along with the NVIDIA Container Toolkit, etc.

All of that is up and running.

Thank you.