I am successfully running Stable Diffusion with PyTorch-ROCm on a 6900 XT. I have the following ROCm and HIP libraries installed, with no system error messages:
However, LM Studio only reports that I have an OpenCL GPU installed. It recognizes the 16GB of VRAM, but still throws an error when I try to load a model. I can run phi3 quickly on the CPU alone, but I'd like to run a larger model on the GPU.
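
For reference, here is a minimal sketch of how I verified that the ROCm stack exposes the card to PyTorch (this only confirms the PyTorch-ROCm side works; LM Studio uses its own GPU detection, so passing this check doesn't guarantee LM Studio will see the device):

```python
# Minimal check, assuming a PyTorch-ROCm wheel is installed.
import torch

print(torch.__version__)          # a ROCm build ends in "+rocmX.Y"
print(torch.version.hip)          # HIP version string; None on CUDA/CPU builds
print(torch.cuda.is_available())  # ROCm devices are reported through the cuda API
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "AMD Radeon RX 6900 XT"
```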