AbdBarho / stable-diffusion-webui-docker

Easy Docker setup for Stable Diffusion with user-friendly UI

AMD GPUs #63

Open flying-sheep opened 2 years ago

flying-sheep commented 2 years ago

Describe the bug

I have an AMD Radeon RX 6800 XT. Stable Diffusion supports this GPU.

After building this image, it fails to run:

 => => naming to docker.io/library/webui-docker-automatic1111                                                                                                                                                0.0s
[+] Running 1/1
 ⠿ Container webui-docker-automatic1111-1  Created                                                                                                                                                           0.2s
Attaching to webui-docker-automatic1111-1
Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

Steps to Reproduce

  1. Run docker compose --profile auto up --build (after download)

Hardware / Software:

AbdBarho commented 2 years ago

@flying-sheep Unfortunately, AMD GPUs are not currently supported. I know that the auto fork can run on AMD GPUs https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs, but I don't have one to test with.

If you would like to contribute, that would be great!

flying-sheep commented 2 years ago

This docker-compose file seems to support passing AMD GPUs to docker: https://github.com/compscidr/lolminer-docker/blob/main/docker-compose.yml

But I don’t know what’s necessary software-wise. Making just the device change (see the sketch after the log below), I get:

webui-docker-automatic1111-1  | txt2img: 
webui-docker-automatic1111-1  | /opt/conda/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
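
For reference, the device change I mean would look roughly like this in docker-compose.yml (a sketch modeled on the linked lolminer compose, untested; the service name is inferred from the container name above):

services:
  automatic1111:
    # ... existing build/image configuration ...
    devices:
      - /dev/kfd   # ROCm compute interface (kernel fusion driver)
      - /dev/dri   # GPU render nodes
    group_add:
      - video      # grant the container access to the device nodes
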
flying-sheep commented 2 years ago

Ah, seems like PyTorch needs to be installed via pip to get ROCm support. But it’s unclear to me whether that means it somehow detects the GPU while building, because if the built PyTorch package can run on both CUDA and ROCm, there’s no reason not to distribute it via Anaconda, right?

AbdBarho commented 2 years ago

You are asking difficult questions my friend.

flying-sheep commented 2 years ago

Welp, apparently nvidia has pressed enough people into their monopoly that I’m the first one :anguished:

JoeMojoJones commented 2 years ago

Have a look at : https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs#running-inside-docker

You need to pass the GPU through into the Docker container for ROCm to use it.
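
In short, the passthrough from that wiki section boils down to something like this (a rough sketch; rocm/pytorch is the image the wiki uses):

# Expose the ROCm compute (/dev/kfd) and render (/dev/dri) device nodes
# and add the container to the video group so it can access them.
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video rocm/pytorch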

AbdBarho commented 2 years ago

@JoeMojoJones thank you, this link is helpful for reference.

The problem is I have no AMD GPU so I can't even test if the code works.

GBora commented 2 years ago

@AbdBarho I have PyTorch installed via pip on my machine; what do I need to modify in the Dockerfile to get AMD working? Maybe if it works I can do a PR for this?

AbdBarho commented 2 years ago

@GBora that's great! Unfortunately, I have no experience working with AMD GPUs and Docker for deep learning. Maybe the link above can help guide you.

I would guess the changes would probably be related to the base image and the deploy config in docker compose, but this is just a guess.
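
For context, the NVIDIA path goes through the standard compose GPU reservation, which is what produces the "could not select device driver" error above when no NVIDIA runtime is installed (a sketch of the usual convention, not necessarily this repo's exact file):

deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia        # requires the NVIDIA container toolkit on the host
          count: 1
          capabilities: [gpu]

My guess is an AMD variant would drop this block and mount the ROCm device nodes instead, as sketched earlier in the thread.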

NazarYermolenko commented 1 year ago

The problem is I have no AMD GPU so I can't even test if the code works.

Please make the changes to the docker-compose file and let me know; I'll pull them, try to run it, and tell you whether everything is correct :) At the moment invoke doesn't return the issue from the discussion. I have an RX 6600 and will try to run it.

mtthw-meyer commented 1 year ago

I got it working pretty easily for AMD

https://github.com/AbdBarho/stable-diffusion-webui-docker/pull/362/files

flying-sheep commented 1 year ago

Awesome, your branch works nicely indeed!

Finally a way to use the GPU's potential lol.

svupper commented 1 year ago

Hello, I have this error although I have a Tesla T4 and Ubuntu 22.04. Can somebody help me pls? I thought using Docker might make my life easier :'c

svupper commented 1 year ago

Ok :) I just needed to execute this:

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker

f1am3d commented 1 year ago

@flying-sheep Was it merged to master?

flying-sheep commented 1 year ago

No, doesn’t look like it: #362

I just checked it out locally and ran it.

tgm4883 commented 1 year ago

@mtthw-meyer Does your fork still work? I'm trying to get it up and running but it complains "Found no NVIDIA driver on your system". This is usually bypassed by passing "--skip-torch-cuda-test" to launch.py, but I don't see where launch.py gets used.

Nevermind, I got it working. I had to update some things in the Dockerfile for torch, install some additional packages, and edit the requirements file to get auto working. Still trying to sort out invokeai.
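
For anyone else hitting the driver check: flags like --skip-torch-cuda-test seem to reach launch.py through the CLI_ARGS environment variable in this repo's docker-compose.yml (assuming I'm reading the setup right):

environment:
  - CLI_ARGS=--skip-torch-cuda-test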

Coniface commented 1 year ago

@tgm4883 could you please open a PR or share your modifications to fix the container?

tgm4883 commented 1 year ago

@Coniface

I'll try to share that when I get home tonight. It's some fixes on the AMD fork, and I know so little about SD that it might have other issues, but it runs and works with the plugins I use.

tgm4883 commented 1 year ago

I'm attaching the git diff I made. I also have a build script that builds and tags the image. I've only gotten the automatic1111 interface to work. Let me know if you have any questions.

# Tag each build with a timestamp so images are uniquely identifiable
TIMESTAMP=$(date +%Y%m%d.%H%M%S)
export BUILD_DATE=$TIMESTAMP
# Remove any stale test container and old "latest" image (ignore errors)
docker rm -f test-sd-auto-1 &>/dev/null || :
docker image rm -f sd:auto-amd-latest &>/dev/null || :
# Build the AMD service and retag the timestamped image as latest
docker compose build auto-amd
docker tag sd:auto-amd-$BUILD_DATE sd:auto-amd-latest

Updated the file I uploaded to clean it up a little bit: 20230918.txt

justin13888 commented 1 year ago

As of writing, I found that the sd-webui documentation is out-of-date for AMD GPUs on Linux (I'm currently using Fedora 39 and want to run it on an AMD 6900 XT): https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs

It also skips a lot of details on the necessary prerequisites for setting up the ROCm/HIP dependencies. I think the easiest way is to use the rocm/pytorch Docker image after all; even the ROCm documentation suggests it as one of the first setup options. One sticking point is that there are a lot of factors affecting whether PyTorch gets installed correctly to detect and use your AMD GPU. I'm currently working on a Docker image that deploys stable-diffusion-webui on AMD GPU systems with one click.

I'd be interested in seeing whether others are working on something similar or have thoughts on this!

cloudishBenne commented 11 months ago

Even though I also think the AMD docs are miserably out-of-date, and I just can't understand why, you don't need to install any special ROCm/HIP system dependencies. The only thing needed is the special PyTorch ROCm Python package (see "PyTorch - Get started locally"):

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

tristan-k commented 7 months ago

Any news on that matter? I'm searching for a way to run webui on a 680M.

justin13888 commented 7 months ago

As an update, I was able to run AUTOMATIC1111 on Fedora 39 using rocm5.7.1 provided through the repo and this version of torch and torchvision:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
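
A quick sanity check that the ROCm build actually sees the GPU (ROCm builds of PyTorch report through the regular torch.cuda API, and torch.version.hip is set only on ROCm builds):

python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"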

justin13888 commented 7 months ago

Any news on that matter? I'm searching for a way to run webui on a 680M.

I have a laptop with the same chip as well but never tried it. You have to make sure your architecture is supported by checking the compatibility matrix (e.g. https://rocm.docs.amd.com/en/docs-5.7.1/release/gpu_os_support.html).

I also found somebody commenting about this in rocm repo: https://github.com/ROCm/ROCm/discussions/2932#discussioncomment-8615032
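
If the 680M (gfx1035) isn't on the official list, the workaround people commonly report, with no guarantees, is overriding the GPU target to the nearest supported RDNA2 one before launching:

# commonly reported workaround for unsupported RDNA2 iGPUs; use at your own risk
HSA_OVERRIDE_GFX_VERSION=10.3.0 python launch.py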