Gadersd / stable-diffusion-xl-burn

Stable Diffusion XL ported to Rust's burn framework
MIT License

ROCm HIP support? #1

Open grigio opened 1 year ago

grigio commented 1 year ago

Is it possible to run stable-diffusion-xl-burn on AMD? https://pytorch.org/docs/stable/notes/hip.html

Gadersd commented 1 year ago

It should be possible but I don't use AMD GPUs so I can't easily test it.
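For context on why it should carry over: per the PyTorch HIP notes linked above, a ROCm build of libtorch reuses the CUDA device API, so `Device::Cuda(0)` addresses the first HIP device. Here is a minimal illustrative sketch through tch-rs (the library that burn's tch backend wraps); this device-selection code is an assumption about how one would wire it up, not code from this repo:

```rust
use tch::{Device, Kind, Tensor};

fn main() {
    // On a ROCm build of libtorch, Device::Cuda(0) is transparently
    // mapped to the first HIP device; there is no separate HIP variant.
    let device = if tch::Cuda::is_available() {
        Device::Cuda(0)
    } else {
        Device::Cpu
    };

    // Allocate a small tensor on the selected device as a smoke test.
    let t = Tensor::zeros(&[2, 2], (Kind::Float, device));
    t.print();
}
```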

codewiz commented 1 year ago

I tried running on a Linux system with amdgpu, and it doesn't seem to work:

```
cargo run --release --bin sample SDXL1.0 7.5 30 "An elegant bright red crab." crab
...
Loading embedder...
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Torch("Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].\n\nCPU:
...
```
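That panic is the dispatcher reporting that the linked libtorch only has CPU kernels registered: the 'CUDA' dispatch key (which a ROCm build would service via HIP) has no implementation of `aten::empty_strided`. In other words, tch-rs was most likely built against a CPU-only libtorch rather than a ROCm one; the torch-sys build script locates the library via the `LIBTORCH` environment variable and falls back to downloading a CPU build. A small probe, assuming only the tch crate, to confirm what the linked libtorch actually supports:

```rust
use tch::Cuda;

fn main() {
    // If this prints `available: false, devices: 0`, the libtorch that
    // tch linked against was built without CUDA/HIP support, which is
    // exactly the situation the panic above describes.
    println!("available: {}", Cuda::is_available());
    println!("devices: {}", Cuda::device_count());
}
```

If that reports zero devices, pointing `LIBTORCH` at a ROCm build of libtorch and rebuilding is the usual next step.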
bzessack commented 1 year ago

I have successfully generated a crab on my RX 6600 XT (8 GB VRAM) on Linux.

Here is my setup, maybe it will help someone along the way:

The image generation took about 2 minutes on my machine.