-
## 🐛 Bug
Simple operations lead to stack smashing after upgrade to ROCm 3.0 and recompilation
## To Reproduce
Steps to reproduce the behavior:
```
import torch
torch.zeros(100).cuda()
*…
```
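Before running the snippet above, it can help to confirm which build of PyTorch is actually loaded after the recompilation. A minimal sketch (not part of the original report; the printed values depend on the local install):
```
# Hedged sketch: confirm the torch build and visible device before the repro.
import torch

print(torch.__version__)          # build string of the installed torch
print(torch.version.hip)          # non-None only on ROCm/HIP builds
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```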
-
### Description of errors
As per https://rocm.docs.amd.com/projects/install-on-windows/en/develop/conceptual/release-versioning.html#windows-builds-from-source, building on Windows is not supported. I…
-
### System Info
Ryzen 5 5500U with integrated GPU
### Reproduction
`cmake -DCOMPUTE_BACKEND=hip -S`
When I try to run this command, I get an error like this:
```
-- Configuring bitsa…
```
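For reference, a hedged sketch of how a HIP-backend configure/build of bitsandbytes is commonly invoked; the source directory, build directory, and editable install step are assumptions for illustration, not taken from the report:
```
# Hypothetical invocation, assuming the commands are run from a bitsandbytes
# source checkout with ROCm installed in its default location.
cmake -DCOMPUTE_BACKEND=hip -S . -B build
cmake --build build
pip install -e .
```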
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmengine/issues) and [Discussions](https://github.com/open-mmlab/mmengine/discussions) but cannot get the expected help.
…
-
## Issue description
On my system, many of the largest include dirs come from ROCm packages:
```
6.0M /nix/store/l1w0j967cmbwwvs1hiwwgkr0zw88jc4y-rocm-llvm-clang-5.4.4/lib/clang/15.0.0/include/…
```
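Figures like the one above can be gathered with something along these lines (a sketch; the store path pattern and depth are assumptions):
```
# Hedged sketch: list the largest include directories under the Nix store.
find /nix/store -maxdepth 6 -type d -name include -exec du -sh {} + 2>/dev/null | sort -rh | head -n 20
```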
-
### Describe the bug
With a fresh install of 1.15, Exllamav2_HF loads a model just fine... However, when I do a local install of exllamav2, then both it and the Exllamav2_HF loaders break (errors b…
-
Without "llvm":
```
 * Package: sci-libs/rocSPARSE-6.0.2:0/6.0
 * Repository: rocm-bleeding-edge
 * Maintainer: sci@gentoo.org candrews@gentoo.org,gentoo@holzke.net
 * USE: abi_x86_64 amd…
```
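A hedged sketch of how the flag mentioned above would typically be toggled on Gentoo; the flag name "llvm" is taken from the quote, but the exact package.use entry for this overlay is an assumption:
```
# Hypothetical /etc/portage/package.use entry enabling the "llvm" USE flag
# for rocSPARSE from the rocm-bleeding-edge overlay.
sci-libs/rocSPARSE llvm
```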
-
### Problem Description
I get these errors often from [various applications](https://github.com/pytorch/pytorch/issues/134208); this one is from ComfyUI.
Is scaled_dot_product_attention part of fl…
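For context, a minimal sketch of how scaled_dot_product_attention is typically called and how its backend can be pinned; the tensor shapes and the choice of forcing the flash backend are assumptions for illustration, not taken from the ComfyUI trace:
```
# Hedged sketch: call SDPA and restrict dispatch to the flash-attention backend.
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# If the flash backend is unsupported on this build/GPU, restricting dispatch
# makes the failure explicit instead of a silent fallback.
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)
```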
-
**Is your feature request related to a problem? Please describe:**
Right now, it is not possible to use this library on AMD hardware because it is ABI-linked to either the CPU or CUDA version of Torc…
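As a side note, which Torch variant an environment actually provides can be probed at runtime; a small sketch (not part of the original request), assuming a standard PyTorch install:
```
# Hedged sketch: distinguish CPU-only, CUDA, and ROCm/HIP builds of torch.
import torch

if torch.version.hip is not None:
    print("ROCm/HIP build:", torch.version.hip)
elif torch.version.cuda is not None:
    print("CUDA build:", torch.version.cuda)
else:
    print("CPU-only build")
```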
-
I have posted this issue on AMD's side but no response there...
https://github.com/ROCm/flash-attention/issues/73
As the same issue happens on the main branch as well, I thought I'd try here...
Step…