-
GGML_HIP_UMA allows using hipMallocManaged so that UMA can be used on AMD/HIP GPUs.
I have a Ryzen 7940HS and ran some tests. Using UMA allows using much more memory than the VRAM reserved for the iGPU, which is nice. I…
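For context, the flag described above is passed at configure time. A minimal sketch, assuming a standard `/opt/rocm` install and the `GGML_HIP_UMA` spelling from this discussion (older trees may spell the options `LLAMA_*`; check your checkout):

```sh
# Sketch: configure llama.cpp with the HIP backend and unified memory.
# Flag names and paths are assumptions from this discussion; verify
# against the CMakeLists.txt of your llama.cpp checkout.
cmake -B build \
      -DCMAKE_C_COMPILER=/opt/rocm/llvm/bin/clang \
      -DCMAKE_CXX_COMPILER=/opt/rocm/llvm/bin/clang++ \
      -DGGML_HIPBLAS=ON \
      -DGGML_HIP_UMA=ON
cmake --build build --config Release
```

With `GGML_HIP_UMA=ON`, device allocations go through `hipMallocManaged` instead of `hipMalloc`, so the iGPU can page in ordinary system RAM rather than being limited to the BIOS-reserved VRAM carve-out.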
-
### System Info
An AMD Epyc system with 3 MI210.
Quite a complex setup. The system uses Slurm to schedule batch jobs, which are usually in the form of `apptainer run` containers. The image I'm using ha…
-
Hello! I'm trying to compile llama.cpp for ROCm on Windows. The problem I'm having is that CMake uses CC=/opt/rocm/llvm/bin/clang and CXX=/opt/rocm/llvm/bin/clang++, but on Windows the locations are at…
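On Windows the Unix default paths can be overridden explicitly at configure time instead. A sketch, assuming ROCm is installed under `C:\Program Files\AMD\ROCm\5.7` (the path and version are placeholders; adjust to your install):

```bat
:: Sketch for a Windows command prompt: point CMake at the ROCm clang
:: toolchain explicitly rather than relying on /opt/rocm defaults.
:: The install path below is an assumption; adjust to your ROCm version.
set PATH=C:\Program Files\AMD\ROCm\5.7\bin;%PATH%
cmake -B build -G Ninja ^
      -DCMAKE_C_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang.exe" ^
      -DCMAKE_CXX_COMPILER="C:/Program Files/AMD/ROCm/5.7/bin/clang++.exe" ^
      -DGGML_HIPBLAS=ON
cmake --build build
```

Passing `-DCMAKE_C_COMPILER`/`-DCMAKE_CXX_COMPILER` on the command line takes precedence over any `CC`/`CXX` defaults baked into the build scripts.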
-
### Problem Description
CTest needs to verify that all components are built and functional.
Missing:
* VX_RPP
### Operating System
ALL
### CPU
ALL
### GPU
AMD Instinct MI300
### Other
_No respo…
-
Based on previous discussion here, AMD GPU acceleration has been a challenge because ROCm isn't built into org.freedesktop.Platform. A workaround that's been working really well with [Speech Note](htt…
-
Any advice for an older 6600XT card that seems to want "gfx1032" for llama.cpp? I've tried with Ubuntu 24 and 23.10 and get this crash:
```
rocBLAS error: Cannot read /opt/rocm/lib/rocblas/libra…
```
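A commonly used workaround for RDNA2 cards that rocBLAS ships no kernel library for (gfx1032 among them) is to have the runtime report the card as gfx1030, which rocBLAS does support. A sketch; this override is unofficial and not guaranteed to work on every ROCm release:

```sh
# Sketch: make a gfx1032 (RX 6600 XT) report itself as gfx1030 so that
# rocBLAS finds a matching TensileLibrary. Set before launching.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
# Binary and model names below are placeholders.
./llama-cli -m model.gguf -ngl 99
```

This works because gfx1030 and gfx1032 share the RDNA2 ISA closely enough for the prebuilt kernels to run; it is a workaround, not a supported configuration.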
-
Tracking issue for [ROCm](https://github.com/RadeonOpenCompute/ROCm) derivations.
## Key
- Package
- Dependencies
## WIP
-
## Ready
-
## TODO
- [ ] Add CUDA options to all der…
-
Please consider adding ROCm support for AMD GPUs.
-
Changes need to be made in rocm-cmake to support installation of libraries at `$ROCM_PATH/lib/migraphx`.
First, a `PRIVATE` flag should be added to `rocm_install_targets` that will install the …
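On the caller side, the proposed change might look roughly like the following. This is a sketch based only on the description above, not on current rocm-cmake API; the keyword name and behaviour are the proposal, and the target name is a placeholder:

```cmake
# Sketch of the proposed usage (not current rocm-cmake API):
# a PRIVATE flag on rocm_install_targets that installs the library
# under ${ROCM_PATH}/lib/migraphx instead of ${ROCM_PATH}/lib.
rocm_install_targets(
  TARGETS migraphx        # placeholder target name
  PRIVATE                 # proposed flag: install into lib/migraphx
)
```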
-
For those having problems compiling llama-cpp-python with ROCm 6.0 (https://github.com/abetlen/llama-cpp-python), here is how I did it:
- Follow all the tutorials here for installing text-gene…
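The usual way to force the ROCm backend when pip-building llama-cpp-python is to pass CMake flags through the environment. A sketch, assuming ROCm 6.0 under `/opt/rocm`; the `LLAMA_HIPBLAS` spelling matches llama.cpp of that era, while newer trees rename these options:

```sh
# Sketch: build llama-cpp-python against ROCm's hipBLAS backend.
# Flag name and compiler paths are assumptions for a ROCm 6.0 setup;
# adjust to your install and to your llama.cpp vendored version.
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" \
CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++ \
pip install --no-cache-dir --force-reinstall llama-cpp-python
```

`--no-cache-dir --force-reinstall` matters here: without it, pip may reuse a previously built CPU-only wheel instead of recompiling with the HIP flags.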