flying-sheep opened this issue 2 years ago
@flying-sheep Unfortunately, AMD GPUs are not currently supported.
I know that the AUTOMATIC1111 fork can run on AMD GPUs (https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs), but I don't have one to test with.
If you would like to contribute, that would be great!
This docker-compose file seems to support passing AMD GPUs to docker: https://github.com/compscidr/lolminer-docker/blob/main/docker-compose.yml
But I don’t know what’s necessary software-wise. Making just the device change, I get:
webui-docker-automatic1111-1 | txt2img:
webui-docker-automatic1111-1 | /opt/conda/lib/python3.8/site-packages/torch/autocast_mode.py:162: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
Ah, it seems PyTorch needs to be installed via pip to get ROCm support. But it’s unclear to me whether that means it somehow detects the GPU while building, because if the built PyTorch package can run on both CUDA and ROCm, there’s no reason not to distribute it via Anaconda, right?
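For what it’s worth, CUDA and ROCm are shipped as separate builds of the same package. A quick sketch to check which one is installed (assuming torch is already installed in the active environment):

# ROCm wheels carry a +rocmX.Y version suffix and a non-None torch.version.hip;
# torch.cuda.is_available() returns True on ROCm too, since it reuses the CUDA device API.
python3 -c 'import torch; print(torch.__version__, torch.version.hip, torch.cuda.is_available())'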
You are asking difficult questions my friend.
Welp, apparently NVIDIA has pressed enough people into their monopoly that I’m the first one to ask :anguished:
Have a look at : https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs#running-inside-docker
You need to pass the GPU through into the Docker container for ROCm to use it.
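For reference, the linked wiki section boils down to handing the ROCm kernel devices to the container, roughly like this (a sketch; the image name is just an example, not this project's image):

# /dev/kfd is the ROCm compute interface, /dev/dri holds the GPU render nodes;
# the video group gives the container user permission to open them.
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video \
  --security-opt seccomp=unconfined rocm/pytorch:latest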
@JoeMojoJones thank you, this link is helpful for reference.
The problem is I have no AMD GPU so I can't even test if the code works.
@AbdBarho I have Pytorch installed via pip on my machine, what do I need to modify in the docker file to get AMD working? Maybe if it works I can do a PR for this?
@GBora that's great! Unfortunately, I have no experience working with AMD GPUs and Docker for deep learning. Maybe the link above can help guide you.
I would guess the changes would probably be related to the base image and the deploy config in docker compose, but this is just a guess.
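To make that guess concrete: a hypothetical AMD service would likely drop the NVIDIA deploy block and pass the kernel devices directly (service name and layout here are assumptions, not taken from the actual compose file):

services:
  auto-amd:
    devices:
      - /dev/kfd   # ROCm compute interface
      - /dev/dri   # GPU render nodes
    group_add:
      - video      # device-node access for the container user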
Please make the changes to the docker-compose file and let me know; I'll pull them, try to run it, and tell you whether everything works :) At the moment, invoke doesn't return the issue from the discussion. I have an RX 6600 and will try to run it.
I got it working pretty easily for AMD
https://github.com/AbdBarho/stable-diffusion-webui-docker/pull/362/files
Awesome, your branch works nicely indeed!
Finally a way to use the GPU's potential lol.
Hello, I have this error although I have a Tesla T4 and Ubuntu 22.04. Can somebody help me pls? I thought using Docker might make my life easier :'c
Ok :) I just needed to execute this :
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
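(A quick way to verify the toolkit is wired up, assuming you have any CUDA base image handy:)

# should print the host GPU table from inside a container
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi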
@flying-sheep Was it merged to master?
No, doesn’t look like it: #362
I just checked it out locally and ran it.
@mtthw-meyer Does your fork still work? I'm trying to get it running, but it complains "Found no NVIDIA driver on your system". This is usually bypassed by passing "--skip-torch-cuda-test" to launch.py, but I don't see where launch.py gets used.
Never mind, I got it working. I had to update some things in the Dockerfile for torch, install some additional packages, and edit the requirements file to get auto working. Still trying to sort out InvokeAI.
@tgm4883 could you please open a PR or share your modifications to fix the container?
@Coniface
I'll try to share that when I get home tonight. It's some fixes on the AMD fork, and I know so little about SD that it might have other issues, but it runs and works with the plugins I use.
I'm attaching the git diff I made. I also have a build script that builds and tags the image. I've only gotten the automatic1111 interface to work. Let me know if you have any questions.
# timestamp doubles as the image build tag
TIMESTAMP=$(date +%Y%m%d.%H%M%S)
export BUILD_DATE=$TIMESTAMP

# remove any leftover test container and the previous "latest" image
docker rm -f test-sd-auto-1 &>/dev/null || :
docker image rm -f sd:auto-amd-latest &>/dev/null || :

# build the AMD variant, then retag the dated image as latest
docker compose build auto-amd
docker tag sd:auto-amd-$BUILD_DATE sd:auto-amd-latest
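(For anyone following along, a hypothetical way to start the resulting image; the container name, port, and device flags are my assumptions, not taken from the diff:)

docker run -d --name test-sd-auto-1 \
  --device=/dev/kfd --device=/dev/dri --group-add video \
  -p 7860:7860 sd:auto-amd-latest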
Updated the file I uploaded to clean it up a little bit 20230918.txt
As of writing, I found that the sd-webui documentation is out of date for AMD GPUs on Linux (I'm currently using Fedora 39 and want to run it on an AMD 6900 XT): https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs
It also skips a lot of detail on the prerequisites for setting up the ROCm/HIP dependencies. I think the easiest way is to use the rocm/pytorch Docker image after all; even the ROCm documentation suggests it as one of the first setup options. One sticking point is that many factors affect whether PyTorch gets installed correctly to detect and use your AMD GPU. I'm currently working on a Docker image that deploys stable-diffusion-webui on AMD GPU systems with one click.
I'd be interested in seeing whether others are working on something similar or have thoughts on this!
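As a rough illustration of that approach (entirely hypothetical; the clone path and launch flags are assumptions, and the usual /dev/kfd and /dev/dri passthrough is still needed at run time):

# start from ROCm's PyTorch image so torch can already see the GPU
FROM rocm/pytorch:latest
RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git /sd
WORKDIR /sd
# launch.py installs the remaining Python dependencies on first run
CMD ["python3", "launch.py", "--listen"]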
Even though I also think the AMD docs are miserably out of date, and I just can't understand why, you don't need to install any special ROCm/HIP system dependencies. The only thing needed is the ROCm build of PyTorch:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
(See the "Get Started Locally" page on pytorch.org for the matching install selector.)
Any news on this matter? I'm searching for a way to run the webui on a 680M.
As an update, I was able to run AUTOMATIC1111 on Fedora 39 using ROCm 5.7.1 provided through the distro repo and this version of torch and torchvision:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
I have a laptop with the same chip as well but have never tried it. You have to make sure your architecture is supported by checking the compatibility matrix (e.g. https://rocm.docs.amd.com/en/docs-5.7.1/release/gpu_os_support.html).
I also found somebody commenting about this in rocm repo: https://github.com/ROCm/ROCm/discussions/2932#discussioncomment-8615032
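For integrated RDNA2 chips like the 680M (gfx1035), which don't appear in the official support matrix, a commonly reported community workaround is to spoof a supported architecture before launching (unofficial and unsupported, so results vary):

# tell ROCm to treat the iGPU as gfx1030 (the desktop RX 6000 series)
export HSA_OVERRIDE_GFX_VERSION=10.3.0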
Describe the bug
I have an AMD Radeon RX 6800 XT. Stable Diffusion supports this GPU.
After building this image, it fails to run:
Steps to Reproduce
docker compose --profile auto up --build
(after download)

Hardware / Software: