-
A recurring issue I see coming up over and over is people being unable to run models on their hardware for any number of reasons, one of the biggest being that llama.cpp has not incorporated …
-
Hello everyone
We recently added a second Radeon Pro VII to our simulation system. Unfortunately, though, it seems the GPUs do not want to talk to each other, although they are directly connected w…
-
### Suggestion Description
Docker
### Operating System
_No response_
### GPU
_No response_
### ROCm Component
_No response_
-
### Problem Description
I have an AMD Radeon RX 7800 XT 16 GB, but I couldn't select it in the list.
I'm having problems with soft locks when generating Stable Diffusion images. I need to restart lightd…
-
I get a segfault when I install the deps and load into Comfy.
-
### Please describe your question
AssertionError: multinomial op is not supported on ROCM yet.
Since paddle.multinomial() is not supported on the ROCm platform, is there an alternative?
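The digest does not include an answer, but one common workaround for an unsupported multinomial op is inverse-CDF sampling built from cumulative sum and binary search, which are widely supported primitives. A minimal sketch using NumPy on the host (the same pattern may work on-device with `paddle.cumsum` and `paddle.searchsorted`, if your Paddle build provides them); the function name is illustrative, not a Paddle API:

```python
import numpy as np

def multinomial_sample(probs, num_samples, rng=None):
    """Draw `num_samples` category indices with replacement from `probs`,
    mimicking multinomial sampling via inverse-CDF: normalize, take the
    cumulative sum, then binary-search uniform draws into it."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=np.float64)
    cdf = np.cumsum(probs / probs.sum())      # normalized cumulative distribution
    u = rng.random(num_samples)               # uniform draws in [0, 1)
    idx = np.searchsorted(cdf, u, side="right")
    # Guard against floating-point rounding making cdf[-1] slightly below 1.0.
    return np.minimum(idx, len(cdf) - 1)

samples = multinomial_sample([0.1, 0.2, 0.7], 1000)
```

With replacement semantics this matches the common `multinomial(..., replacement=True)` case; sampling without replacement needs a different approach (e.g. Gumbel top-k).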
-
Running `clEnqueueSVMMemcpy(queue, CL_TRUE, dst, src, size, 0, NULL, NULL);` with dst allocated with `clSVMAlloc` and src allocated by the system (e.g. posix_memalign) triggers a segmentation fault:
…
-
### Problem Description
The example host has four GPUs:
```
$ rocm-smi
========================= ROCm System Management Interface =========================
===================================…
-
### What is the issue?
When running models concurrently, the scheduler is unaware of WDDM KMD memory allocations backed by system memory and only looks at GPU-reported memory usage, which can lead to loadin…
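The gap described above can be sketched abstractly: if driver-reported usage lags or misses commitments backed by system memory, the scheduler must also count its own in-flight reservations before admitting another model. The following is an illustrative sketch only, not the project's actual scheduler code; all names (`GpuMemoryBudget`, `reported_used`, etc.) are assumptions:

```python
class GpuMemoryBudget:
    """Conservative admission check that counts both driver-reported
    usage and the scheduler's own pending reservations, so models are
    not admitted based on stale or incomplete reported numbers."""

    def __init__(self, total_bytes):
        self.total = total_bytes
        self.pending = 0  # bytes we committed that the driver may not report yet

    def can_fit(self, model_bytes, reported_used):
        # Admit only if reported usage plus our own reservations leave room.
        return reported_used + self.pending + model_bytes <= self.total

    def reserve(self, model_bytes):
        self.pending += model_bytes

    def release(self, model_bytes):
        self.pending -= model_bytes

budget = GpuMemoryBudget(total_bytes=16 * 2**30)
budget.reserve(10 * 2**30)
# A second 10 GiB model no longer fits, even if the driver still reports 0 used.
print(budget.can_fit(10 * 2**30, reported_used=0))  # → False
```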
-
At least for the fuzzyHSA project, would it be possible to set up a pool of test systems to run these fuzzers and openly report the results? Even if they are of limited usefulness to start.
…