bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index

RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback): #1093

Status: Open · SumaiyaSultan2002 opened this issue 5 months ago

SumaiyaSultan2002 commented 5 months ago

System Info

The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.

Traceback (most recent call last):
  File "c:\SQl coder\app.py", line 22, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\modeling_utils.py", line 3026, in from_pretrained
    hf_quantizer.validate_environment(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\quantizers\quantizer_bnb_8bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`

(sqlenv) PS C:\SQl coder> pip install -i https://pypi.org/simple/ bitsandbytes
Looking in indexes: https://pypi.org/simple/, https://pypi.ngc.nvidia.com
Collecting bitsandbytes
  Downloading bitsandbytes-0.42.0-py3-none-any.whl.metadata (9.9 kB)
Requirement already satisfied: scipy in c:\sql coder\sqlenv\lib\site-packages (from bitsandbytes) (1.12.0)
Requirement already satisfied: numpy<1.29.0,>=1.22.4 in c:\sql coder\sqlenv\lib\site-packages (from scipy->bitsandbytes) (1.26.4)
Downloading bitsandbytes-0.42.0-py3-none-any.whl (105.0 MB)
Installing collected packages: bitsandbytes
Successfully installed bitsandbytes-0.42.0

(sqlenv) PS C:\SQl coder> & "c:/SQl coder/sqlenv/Scripts/python.exe" "c:/SQl coder/app.py"
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
False

===================================BUG REPORT===================================
C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\cuda_setup\main.py:167: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

warn(msg)

CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
The following directories listed in your path were found to be non-existent: {WindowsPath('/usr/local/cuda/lib64')}
DEBUG: Possible options found for libcudart.so: set()
CUDA SETUP: PyTorch settings found: CUDA_VERSION=118, Highest Compute Capability: 8.6.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Problem: The main issue seems to be that the main CUDA runtime library was not detected.
CUDA SETUP: Solution 1: To solve the issue the libcudart.so location needs to be added to the LD_LIBRARY_PATH variable
CUDA SETUP: Solution 1a): Find the cuda runtime library via: find / -name libcudart.so 2>/dev/null
CUDA SETUP: Solution 1b): Once the library is found add it to the LD_LIBRARY_PATH: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:FOUND_PATH_FROM_1a
CUDA SETUP: Solution 1c): For a permanent solution add the export from 1b into your .bashrc file, located at ~/.bashrc
CUDA SETUP: Solution 2: If no library was found in step 1a) you need to install CUDA.
CUDA SETUP: Solution 2a): Download CUDA install script: wget https://raw.githubusercontent.com/TimDettmers/bitsandbytes/main/cuda_install.sh
CUDA SETUP: Solution 2b): Install desired CUDA version to desired location. The syntax is bash cuda_install.sh CUDA_VERSION PATH_TO_INSTALL_INTO.
CUDA SETUP: Solution 2b): For example, "bash cuda_install.sh 113 ~/local/" will download CUDA 11.3 and install into the folder ~/local

Traceback (most recent call last):
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\utils\import_utils.py", line 1383, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "C:\Users\sumai\anaconda\Lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\integrations\bitsandbytes.py", line 11, in <module>
    import bitsandbytes as bnb
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\__init__.py", line 6, in <module>
    from . import cuda_setup, utils, research
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\research\__init__.py", line 1, in <module>
    from . import nn
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\research\nn\__init__.py", line 1, in <module>
    from .modules import LinearFP8Mixed, LinearFP8Global
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\research\nn\modules.py", line 8, in <module>
    from bitsandbytes.optim import GlobalOptimManager
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\optim\__init__.py", line 6, in <module>
    from bitsandbytes.cextension import COMPILED_WITH_CUDA
  File "C:\SQl coder\sqlenv\Lib\site-packages\bitsandbytes\cextension.py", line 20, in <module>
    raise RuntimeError('''
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "c:\SQl coder\app.py", line 22, in <module>
    model = AutoModelForCausalLM.from_pretrained(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\models\auto\auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\modeling_utils.py", line 3391, in from_pretrained
    hf_quantizer.preprocess_model(
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\quantizers\base.py", line 166, in preprocess_model
    return self._process_model_before_weight_loading(model, **kwargs)
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\quantizers\quantizer_bnb_8bit.py", line 219, in _process_model_before_weight_loading
    from ..integrations import get_keys_to_not_convert, replace_with_bnb_linear
  File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\utils\import_utils.py", line 1373, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "C:\SQl coder\sqlenv\Lib\site-packages\transformers\utils\import_utils.py", line 1385, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

    CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues
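A side note on the CUDA SETUP hints above: they are written for Linux (find, LD_LIBRARY_PATH, ~/.bashrc), while this run is on Windows. As a rough cross-platform check of whether a CUDA runtime library is discoverable from the failing interpreter, something like the sketch below can help; it is not part of bitsandbytes, and the candidate library names are illustrative guesses rather than values from this log:

    # Hypothetical diagnostic, not part of bitsandbytes: can the loader find a CUDA runtime?
    import ctypes.util
    import os

    # "cudart" matches libcudart.so on Linux; "cudart64_118" matches the
    # cudart64_118.dll shipped with CUDA 11.8 on Windows (names are assumptions).
    for name in ("cudart", "cudart64_118", "cudart64_12"):
        print(name, "->", ctypes.util.find_library(name) or "not found")

    # Directories the loader searches (PATH on Windows, LD_LIBRARY_PATH on Linux)
    for var in ("PATH", "LD_LIBRARY_PATH"):
        for entry in os.environ.get(var, "").split(os.pathsep):
            if "cuda" in entry.lower():
                print(var, "contains:", entry)

If nothing turns up, the CUDA runtime simply is not on the loader's search path for this environment, which is consistent with the log above.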

Reproduction

https://github.com/defog-ai/sqlcoder/blob/main/defog_sqlcoder_colab.ipynb

Expected behavior

I want to run defog.ai SQLCoder-7b-2. The script below (app.py) is what I am trying to run:

import sqlite3

import pandas as pd  # needed for the pd.DataFrame call in execute_sql below
import sqlparse
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Model loading and configuration
model_name = "defog/sqlcoder-7b-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

if torch.cuda.is_available():
    # memory_allocated() reports memory already allocated by this process
    # (near zero before the model loads), so this normally takes the 8-bit branch.
    available_memory = torch.cuda.memory_allocated()
    if available_memory > 15e9:
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            trust_remote_code=True,
            torch_dtype=torch.float16,
            device_map="auto",
            use_cache=True,
        )
    else:
        model = AutoModelForCausalLM.from_pretrained(
            model_name,
            trust_remote_code=True,
            load_in_8bit=True,
            device_map="auto",
            torch_dtype=torch.float16,
            use_cache=True,
        )
else:
    model = AutoModelForCausalLM.from_pretrained(
        model_name, trust_remote_code=True, use_cache=True
    )

prompt = """### Task
Generate a SQL query to answer [QUESTION]{question}[/QUESTION]

### Instructions

### Database Schema
CREATE TABLE website_support_ticket (
    id INTEGER PRIMARY KEY,
    sla_active BOOLEAN, -- SLA is active if it is true else it is inactive
    asset_id INTEGER, -- Space for which the ticket is created
    equipment_id INTEGER, -- Equipment for which the ticket is created
    equipment_location_id INTEGER, -- Space where the Equipment is located
    maintenance_team_id INTEGER, -- Maintenance Team that is responsible for the ticket actions
    at_start_mro BOOLEAN, -- Photo is required to start a work order
    at_done_mro BOOLEAN, -- Photo is required to close a work order
    at_review_mro BOOLEAN, -- Photo is required to review a work order
    mro_order_id INTEGER, -- Order related to the ticket
    employee_id INTEGER, -- Employee related to the ticket
    pause_reason_id INTEGER, -- Reason for Pause
    equip_block_id INTEGER, -- Block of an equipment for which the ticket is created
    space_block_id INTEGER, -- Block of a space for which the ticket is created
    requestee_id INTEGER, -- Requestor of the ticket
    region_id INTEGER, -- Region of the ticket
    is_reopen BOOLEAN, -- Ticket was reopened if this is set to True
    reopen_count INTEGER, -- Number of times this ticket was reopened
    on_hold_date TIMESTAMP WITHOUT TIME ZONE, -- Date on which the ticket was moved to On-Hold
    doc_count INTEGER, -- Count of Attachments
    sla_end_date TIMESTAMP WITHOUT TIME ZONE, -- Planned End date for SLA
    priority_id INTEGER, -- Priority of the Ticket
    category_id INTEGER, -- Category of the Problem
    sub_category_id INTEGER, -- Sub Category of the Problem
    state_id INTEGER, -- Status of the ticket (Open, InProgress, Closed, Paused)
    company_id INTEGER, -- Company of the ticket
    close_time TIMESTAMP WITHOUT TIME ZONE, -- Ticket Closed Date time
    closed_by_id INTEGER, -- Technician who closed the ticket
    ticket_type CHARACTER VARYING, -- Proactive or Reactive
    sla_status CHARACTER VARYING, -- To show within SLA or SLA elapsed
    state_category_id CHARACTER VARYING, -- Category to which the Status belongs to
    subject CHARACTER VARYING, -- Subject line of the Problem
    issue_type CHARACTER VARYING, -- Issue Type of the Ticket
    close_comment CHARACTER VARYING, -- Comments that were entered while closing the ticket
    current_escalation_level CHARACTER VARYING, -- To show the current escalation level
    type_category CHARACTER VARYING, -- Type category of the ticket
    state_name CHARACTER VARYING, -- State to which the site belongs to
    city_name CHARACTER VARYING, -- City to which the site belongs to
    last_commented_by CHARACTER VARYING, -- Comment
    region CHARACTER VARYING, -- Region of the ticket
    mro_state CHARACTER VARYING -- Status of the Work order
);

CREATE TABLE res_company (id INTEGER PRIMARY KEY, name VARCHAR(20));

CREATE TABLE mro_maintenance_team (id INTEGER PRIMARY KEY, name VARCHAR(20));

CREATE TABLE mro_equipment_location (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE mro_equipment (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE website_support_ticket_state (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE mro_order (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE website_support_ticket_category (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE website_support_ticket_subcategory (id INTEGER PRIMARY KEY, name VARCHAR(50));

CREATE TABLE website_support_ticket_priority (id INTEGER PRIMARY KEY, name VARCHAR(50));

-- website_support_ticket.company_id can be joined with res_company.id
-- website_support_ticket.maintenance_team_id can be joined with mro_maintenance_team.id
-- website_support_ticket.asset_id can be joined with mro_equipment_location.id
-- website_support_ticket.equipment_id can be joined with mro_equipment.id
-- website_support_ticket.state_id can be joined with website_support_ticket_state.id
-- website_support_ticket.mro_order_id can be joined with mro_order.id
-- website_support_ticket.category_id can be joined with website_support_ticket_category.id
-- website_support_ticket.sub_category_id can be joined with website_support_ticket_subcategory.id
-- website_support_ticket.priority_id can be joined with website_support_ticket_priority.id

### Answer
Given the database schema, here is the SQL query that answers [QUESTION]{question}[/QUESTION]
[SQL]
"""


def generate_query(question):
    updated_prompt = prompt.format(question=question)
    inputs = tokenizer(updated_prompt, return_tensors="pt").to("cuda")
    generated_ids = model.generate(
        **inputs,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=400,
        do_sample=False,
        num_beams=1,
    )
    outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    torch.cuda.empty_cache()
    torch.cuda.synchronize()
    return sqlparse.format(outputs[0].split("[SQL]")[-1], reindent=True)


def execute_sql(question, db_file):
    query = generate_query(question)
    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()
    try:
        cursor.execute(query)

        # Fetch column names
        columns = [col[0] for col in cursor.description]

        # Fetch results into a pandas DataFrame
        df = pd.DataFrame(cursor.fetchall(), columns=columns)

        # Print the result as a table
        return df.to_markdown(index=False)
    except sqlite3.OperationalError as e:
        # SQLite has no ILIKE operator; retry once with LIKE instead
        if "ILIKE" in str(e):
            query = query.replace("ILIKE", "LIKE")
            cursor.execute(query)
            columns = [col[0] for col in cursor.description]
            df = pd.DataFrame(cursor.fetchall(), columns=columns)
            return df.to_markdown(index=False)
    except sqlite3.Error as e:
        print("Error executing query:", e)
        return None
    finally:
        cursor.close()
        conn.close()


# Streamlit app
st.title("SQL Code Generator")

# Input field for the question
user_question = st.text_input("Enter your question about the database:")

# Button to generate the SQL query
if st.button("Generate SQL"):
    if user_question:
        # Generate SQL query and display it
        generated_sql = generate_query(user_question)
        st.write("Generated SQL Query:")
        st.code(generated_sql)

        # Connect to the database (replace with your database file path)
        db_file = "your_database.db"
        if db_file:
            # Execute the query and display the results
            result = execute_sql(user_question, db_file)
            if result:
                st.write("Results:")
                st.markdown(result)
            else:
                st.write("No results found.")
    else:
        st.warning("Please enter a question.")

This is the code I am trying to run.
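For reference on the deprecation notice at the top of the log: newer transformers versions want the 8-bit request expressed through a BitsAndBytesConfig rather than load_in_8bit=True. A minimal sketch of the 8-bit branch rewritten that way (it only addresses the deprecated argument, not the CUDA setup failure itself) would be:

    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # 8-bit quantization expressed via quantization_config instead of load_in_8bit=True
    bnb_config = BitsAndBytesConfig(load_in_8bit=True)

    model = AutoModelForCausalLM.from_pretrained(
        "defog/sqlcoder-7b-2",
        trust_remote_code=True,
        quantization_config=bnb_config,
        device_map="auto",
        torch_dtype=torch.float16,
        use_cache=True,
    )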

Qu3tzal commented 5 months ago

pip install accelerate

younesbelkada commented 5 months ago

Hi @SumaiyaSultan2002, as suggested by @Qu3tzal, make sure to install accelerate and bitsandbytes in a GPU environment. If you are using Colab, make sure to restart the kernel after installing.
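A quick sanity check along those lines, run from the same interpreter that fails (a hypothetical snippet, not part of either library):

    import importlib.metadata as md
    import torch

    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0), torch.cuda.get_device_capability(0))

    for pkg in ("transformers", "accelerate", "bitsandbytes"):
        try:
            print(pkg, md.version(pkg))
        except md.PackageNotFoundError:
            print(pkg, "is NOT installed in this environment")

If accelerate or bitsandbytes shows up as missing here while pip reports it installed, the install went into a different environment, or the kernel was not restarted after installing.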

tejarao1156 commented 4 months ago

I tried downloading accelerate and bitsandbytes. It is not working; I am still getting this error: RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

Qu3tzal commented 4 months ago

You tried downloading? Did you install them in the python environment you're using?
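One way to answer that question (hypothetical snippet): print which interpreter the failing code actually runs under and compare it with where pip installed the packages:

    import sys
    print(sys.executable)  # the Python binary actually running the code
    print(sys.prefix)      # the environment it belongs to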

tejarao1156 commented 4 months ago

This is my issue.

System Info: Google Colab Pro, V100 GPU

Reproduction

===================================BUG REPORT===================================
The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
The following directories listed in your path were found to be non-existent: {PosixPath('8013'), PosixPath('http'), PosixPath('//172.28.0.1')}
The following directories listed in your path were found to be non-existent: {PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-v100-hm-h2ery00lgftj --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}
The following directories listed in your path were found to be non-existent: {PosixPath('/datalab/web/pyright/typeshed-fallback/stdlib,/usr/local/lib/python3.10/dist-packages')}
The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
DEBUG: Possible options found for libcudart.so: {PosixPath('/usr/local/cuda/lib64/libcudart.so')}
CUDA SETUP: PyTorch settings found: CUDA_VERSION=117, Highest Compute Capability: 7.0.
CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so...
libcusparse.so.11: cannot open shared object file: No such file or directory
CUDA SETUP: Something unexpected happened. Please compile from source:
git clone https://github.com/TimDettmers/bitsandbytes.git
cd bitsandbytes
CUDA_VERSION=117 make cuda11x_nomatmul
python setup.py install
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:183: UserWarning: Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:183: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0', 'libcudart.so.12.1', 'libcudart.so.12.2'] as expected! Searching further paths...
warn(msg)
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:183: UserWarning: WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU! If you run into issues with 8-bit matmul, you can try 4-bit quantization: https://huggingface.co/blog/4bit-transformers-bitsandbytes
warn(msg)

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1471     try:
-> 1472         return importlib.import_module("." + module_name, self.__name__)
   1473     except Exception as e:

22 frames
RuntimeError: CUDA Setup failed despite GPU being available. Please run the following command to get more information:

python -m bitsandbytes

Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
   1472         return importlib.import_module("." + module_name, self.__name__)
   1473     except Exception as e:
-> 1474         raise RuntimeError(
   1475             f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
   1476             f" traceback):\n{e}"

RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

CUDA Setup failed despite GPU being available. Please run the following command to get more information:

python -m bitsandbytes

Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues


Expected behavior

spydaz commented 3 weeks ago

This is still unsolved!

Why has this just randomly occurred? One minute it was all working fine on the laptop, and now bitsandbytes is the cause of every error? Windows!