Stability-AI / StableLM

StableLM: Stability AI Language Models

Torch not compiled with CUDA enabled #31

Open · mikecastrodemaria opened this issue 1 year ago

mikecastrodemaria commented 1 year ago

Hi, on a Mac M1 I get the error "Torch not compiled with CUDA enabled":

Traceback (most recent call last):
  File "/start.py", line 6, in <module>
    model.half().cuda()
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/nn/modules/module.py", line 749, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/dev/miniforge3/lib/python3.10/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
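
A quick way to see which backends the local PyTorch build actually supports (a minimal sketch; the mps check requires PyTorch 1.12+ on macOS):

    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())         # False on Apple Silicon
    print("MPS available:", torch.backends.mps.is_available())  # Metal backend, macOS only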

Thanks

reinies commented 1 year ago

Hey all,

Same for me. When I run python app.py with the app.py I got from https://huggingface.co/spaces/stabilityai/stablelm-tuned-alpha-chat/tree/main, I get:

stablelm-tuned-alpha-chat git:(main) ✗ python app.py
Starting to load the model to memory
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:34<00:00,  8.60s/it]
Traceback (most recent call last):
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/app.py", line 12, in <module>
    "stabilityai/stablelm-tuned-alpha-7b", torch_dtype=torch.float16).cuda()
                                                                      ^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 905, in cuda
    return self._apply(lambda t: t.cuda(device))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/nn/modules/module.py", line 905, in <lambda>
    return self._apply(lambda t: t.cuda(device))
                                 ^^^^^^^^^^^^^^
  File "/Users/rspies/work/projects/stablelm-tuned-alpha-chat/my_project_env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Is there any way to use the CPU instead of CUDA (or something similar)?

I tried it on a Mac with M1.

Best regards, Reinhard

AlexanderFillbrunn commented 1 year ago

Your M1 Mac does not have CUDA, which is a feature of NVIDIA GPUs. From what I found, you should in theory use .to("mps") instead of .cuda() to target Metal, but that causes a different problem for me: the script crashes with the error described in https://github.com/pytorch/pytorch/issues/99564.
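
For reference, a minimal sketch of that substitution, assuming the loading code from the app.py traceback above (the device probing is standard PyTorch; the mps path may still hit the linked bug, in which case the cpu branch below is the fallback):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Prefer CUDA (NVIDIA), then MPS (Apple Silicon / Metal), then CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    model_name = "stabilityai/stablelm-tuned-alpha-7b"  # model from the traceback above
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # float16 halves memory on GPU/MPS; many CPU ops lack half-precision
    # kernels, so fall back to float32 on CPU.
    dtype = torch.float16 if device.type != "cpu" else torch.float32
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=dtype).to(device)

Note that the 7B model needs roughly 14 GB of memory in float16 and 28 GB in float32, so on an 8-16 GB machine a smaller variant (e.g. stabilityai/stablelm-tuned-alpha-3b) may be the only practical choice.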

mcmonkey4eva commented 1 year ago

The GGML project, for running LLMs on CPUs (including Mac support specifically!), has an initial example that can run StableLM: https://github.com/ggerganov/ggml/tree/master/examples/stablelm

There's also https://huggingface.co/cakewalk/ggml-q4_0-stablelm-tuned-alpha-7b/tree/main, which supposedly works in llama.cpp.