bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

BUG, CUDA SETUP, CUDA_VERSION=123_nomatmul #910

Open · Flagami opened this issue 10 months ago

Flagami commented 10 months ago

It's true, my CUDA version is 12.3:

----------nvcc --version--------
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Nov__3_17:16:49_PDT_2023
Cuda compilation tools, release 12.3, V12.3.103
Build cuda_12.3.r12.3/compiler.33492891_0

--------------report information-----------------
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone git@github.com:TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=123_nomatmul python setup.py install
CUDA SETUP: Setup Failed!

Traceback (most recent call last):
  File "finetune_clm_lora.py", line 34, in <module>
    import evaluate
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/evaluate/__init__.py", line 29, in <module>
    from .evaluation_suite import EvaluationSuite
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/evaluate/evaluation_suite/__init__.py", line 10, in <module>
    from ..evaluator import evaluator
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/evaluate/evaluator/__init__.py", line 17, in <module>
    from transformers.pipelines import SUPPORTED_TASKS as SUPPORTED_PIPELINE_TASKS
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 44, in <module>
    from .audio_classification import AudioClassificationPipeline
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/transformers/pipelines/audio_classification.py", line 21, in <module>
    from .base import PIPELINE_INIT_ARGS, Pipeline
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/transformers/pipelines/base.py", line 35, in <module>
    from ..modelcard import ModelCard
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/transformers/modelcard.py", line 48, in <module>
    from .training_args import ParallelMode
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/transformers/training_args.py", line 68, in <module>
    from accelerate.utils import DistributedType
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/accelerate/__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/accelerate/accelerator.py", line 35, in <module>
    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/accelerate/checkpointing.py", line 24, in <module>
    from .utils import (
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/accelerate/utils/__init__.py", line 131, in <module>
    from .bnb import has_4bit_bnb_layers, load_and_quantize_model
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/accelerate/utils/bnb.py", line 42, in <module>
    import bitsandbytes as bnb
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/research/__init__.py", line 1, in <module>
    from . import nn
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/research/nn/__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/research/nn/modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/optim/__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "/storage/.conda/envs/llama2-ch/lib/python3.8/site-packages/bitsandbytes/cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
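
As the error message suggests, the usual next step is to run the diagnostic and make sure the CUDA runtime libraries are visible via LD_LIBRARY_PATH. A minimal shell sketch; the /usr/local/cuda-12.3 path is only an example of where the toolkit might live, not something taken from this report:

    # Search for the CUDA runtime library; adjust the search root if needed.
    find / -name 'libcudart.so*' 2>/dev/null

    # Example only: point LD_LIBRARY_PATH at the directory that actually
    # contains libcudart.so on this machine (here assumed to be CUDA 12.3).
    export LD_LIBRARY_PATH=/usr/local/cuda-12.3/lib64:$LD_LIBRARY_PATH

    # Re-run the diagnostic suggested by the error above.
    python -m bitsandbytes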
younesbelkada commented 10 months ago

Hi @Flagami, have you tried compiling from source:

    CUDA_VERSION=123 make cuda12x
    python setup.py develop
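
Putting that together with the clone step already shown in the error report, a full from-source sketch might look like the following (assuming the cuda12x Makefile target above matches the checkout and that the CUDA 12.3 nvcc is on PATH):

    # Clone the repository named in the error report and enter it.
    git clone git@github.com:TimDettmers/bitsandbytes.git
    cd bitsandbytes

    # Build the CUDA 12.x kernels against the local CUDA 12.3 toolkit.
    CUDA_VERSION=123 make cuda12x

    # Install the freshly built package into the active environment.
    python setup.py develop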
Flagami commented 9 months ago

> Hi @Flagami, have you tried compiling from source:
>
>     CUDA_VERSION=123 make cuda12x
>     python setup.py develop

Thanks for your suggestion!
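
If the rebuild goes through, a quick sanity check is to re-run the diagnostic and confirm that the PyTorch in the environment was built for a compatible CUDA release; a small sketch using only standard commands (torch.version.cuda is plain PyTorch, nothing bitsandbytes-specific):

    # Re-run the bitsandbytes diagnostic from the error message.
    python -m bitsandbytes

    # Print the CUDA release PyTorch was built with and whether a GPU is visible;
    # a mismatch with the locally built bitsandbytes binary can explain setup failures.
    python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"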