-
Hi, how can we run H2O-GPT on AMD GPUs using the AMD ROCm libraries?
One can easily run an inference server with Ollama using ROCm, so H2O-GPT would need to use this Ollama server for inference.
…
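As a minimal sketch of the setup described above: Ollama exposes an OpenAI-compatible chat endpoint, so a client (such as H2O-GPT in its inference-server mode) can target it over HTTP. The URL, model name, and prompt below are illustrative assumptions, not H2O-GPT's actual wiring.

```python
import json

# Assumed default Ollama endpoint; the model name is a placeholder.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(prompt, model="llama3"):
    """Build an OpenAI-style chat-completion payload for an Ollama server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("Hello from an AMD GPU")
body = json.dumps(payload)  # send with any HTTP client, e.g. urllib.request
```

The same payload shape works for any OpenAI-compatible backend, which is what makes the Ollama route attractive here.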
-
It has been observed that block-stride loops decrease performance on AMD; to increase performance, use a direct mapping. Please see the FEM kernels under apps.
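The two indexing schemes can be contrasted with a host-side sketch (pure Python simulating GPU thread IDs; the function names are illustrative and are not the FEM kernels referenced above). In a block-stride loop each thread walks the array in steps of the total thread count, while a direct mapping launches one thread per element.

```python
def block_stride_indices(tid, num_threads, n):
    """Elements thread `tid` touches in a block-stride loop:
    tid, tid + num_threads, tid + 2*num_threads, ..."""
    return list(range(tid, n, num_threads))

def direct_map_indices(tid, n):
    """Direct mapping: each thread owns exactly one element, so the
    launch must cover all n elements with n threads."""
    return [tid] if tid < n else []

# With 4 threads over 8 elements, thread 0 handles [0, 4] in the
# stride loop; a direct mapping instead launches 8 threads, one each.
assert block_stride_indices(0, 4, 8) == [0, 4]
assert direct_map_indices(3, 8) == [3]
```

The observation in the issue is that the extra loop overhead and strided access pattern of the first scheme can cost more on AMD hardware than simply launching enough threads for a one-to-one mapping.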
-
### Checklist
- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [X] The issue exists on a clean inst…
-
### What is the issue?
I have 8 AMD 7900 XTX cards. In llama.cpp, to limit access to certain GPUs, I use the HIP_VISIBLE_DEVICES environment variable and it works correctly. However, if I want to limit GPU access fo…
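For reference, restricting the visible devices via the environment looks like this (the llama.cpp binary name and device IDs below are placeholders): HIP renumbers the exposed devices starting from 0 inside the child process.

```python
import os
import subprocess

# Expose only the first two of the eight GPUs to the child process;
# HIP will enumerate them as devices 0 and 1 inside that process.
env = dict(os.environ, HIP_VISIBLE_DEVICES="0,1")

# Placeholder command: launch llama.cpp's server with the restricted set.
cmd = ["./llama-server", "-m", "model.gguf"]
# subprocess.run(cmd, env=env)  # uncomment on a machine with the binary
```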
-
### What is the issue?
Report date: 2024-11-07
During a custom compile of ollama 0.4.0 on Linux (Pop!_OS 22.04) for AMD ROCm GPUs (AMD 6650 GPU), the initial compile succeeds.
However, when tryi…
-
I install the packages using
```
FORCE_ONLY_CUDA=1 pip install -U -v --no-build-isolation git+https://github.com/rusty1s/pytorch_cluster.git
FORCE_ONLY_CUDA=1 pip install -U -v --no-build-isolati…
-
### Description of the issue
The Shadow Generations portion of the game crashes whenever I use Chaos Control. The Sonic Generations portion runs flawlessly. From what I've heard and seen, this only hap…
-
Are there any providers giving access to AMD GPUs? Could they be added to the list?
-
Any plan for this? Thanks!
-
From fortressforever-2013/fortressforever#9