-
### Your current environment
```text
The output of `python collect_env.py`
```
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-triton-rocm==2.3.0
[pip3] sentence-transform…
-
See the `ROCm` and `Linux` links in https://iree.dev/guides/deployment-configurations/gpu-rocm/
They are https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html and https://www.amd.com/en/…
-
### Problem Description
Hi,
I can't run tritonsrc/tune_flash.py to autotune the flash attention kernel with one specific problem size (just an example); the error message is as follows:
![image](…
-
### System Info
If I follow the installation guide in the README, lion-pytorch is installed (see `requirements-dev.txt`). However, installing lion-pytorch causes uninstallation of PyTorch for ROCm (e.g., 2.…
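A common way to sidestep this kind of conflict, where pip's dependency resolver replaces an already-installed ROCm build of PyTorch with the default CUDA wheel, is to install the package without dependency resolution. This is only a hedged sketch of a possible workaround, not a fix confirmed by the maintainers:

```shell
# Sketch of a workaround: install lion-pytorch without letting pip
# resolve its dependencies, so the existing ROCm PyTorch wheel is kept.
pip install --no-deps lion-pytorch

# Check that the installed torch is still the ROCm build
# (the version string of ROCm wheels typically contains "rocm").
python -c "import torch; print(torch.__version__)"
```

The trade-off is that `--no-deps` skips all of lion-pytorch's declared requirements, so any other missing dependency would have to be installed manually.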
-
### Describe the bug
https://foldingforum.org/viewtopic.php?p=358294
Essentially this bug. Work Units download endlessly and cannot be completed due to crashing.
### Steps To Reproduce
Steps to …
-
# Prerequisites
Before submitting your issue, please ensure the following:
- [x] I am running the latest version of PowerInfer. Development is rapid, and as of now, there are no tagged versions.
- […
-
If a commit has `mem_leak_check` and `rerun_disabled_tests` jobs running along with the regular `trunk` workflow jobs, we find that the HUD page for the commit doesn't list the artifacts for the regu…
-
Not working on RX 7800 XT. Please help.
-
When I run `pip install megablocks` I get this:
```
clang: error: unsupported option '--ptxas-options=-v'
clang: error: unsupported option '--generate-code=arch=compute_90,code=sm_90'…
-
Hi,
thanks for maintaining this repository. I was trying out ROCm today with a RX 7900 XTX, however, everything I tried seems to hang after building the OpenCL kernel. By hang I mean the applicatio…