jankais3r / LLaMA_MPS

Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs.
GNU General Public License v3.0

MPS device support #6

Closed. MZeydabadi closed this issue 1 year ago.

MZeydabadi commented 1 year ago

I get the following error:

File "/home/LLaMA_MPS/llama/model.py", line 102, in __init__
    self.cache_k = torch.zeros(
RuntimeError: PyTorch is not linked with support for mps devices

I ran the code in this environment:

PyTorch version: 1.13.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: None
OS: Linux 5.14.0-162.6.1.el9_1.0.1.x86_64
CMake version: 3.20.2
Python version: Python 3.9.14
Python platform: Linux-5.14.0-162.6.1.el9_1.0.1.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA_MODULE_LOADING set to:
GPU models and configuration: 0,1,2,3
Nvidia driver version: 525.60.13
cuDNN version: 8500

Any idea what is going wrong?

jankais3r commented 1 year ago

Hi, this repo only works under macOS on Apple M1/M2 devices. You appear to be running on a Linux machine with Nvidia GPUs, which will not work: your CUDA build of PyTorch (1.13.1+cu117) is not compiled with the MPS backend, hence the "not linked with support for mps devices" error. I suggest you check out other LLaMA implementations.
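For reference, a minimal sketch of how to verify whether a given PyTorch install can use MPS at all (assumes PyTorch 1.12 or newer, where the MPS backend was introduced). On the Linux/CUDA build reported above, both checks would print False:

import torch

print(torch.__version__)
print(torch.backends.mps.is_built())      # was this PyTorch build compiled with MPS support?
print(torch.backends.mps.is_available())  # is an MPS device actually usable right now?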