bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes` #1100

Open hrmgxuni opened 8 months ago

hrmgxuni commented 8 months ago

System Info

# FROM python:3.9

# WORKDIR /code
# COPY ./requirements.txt /code/requirements.txt
# RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# COPY . .
# # RUN pip install --no-cache-dir --upgrade -r /requirements.txt

# # uvicorn reward_modeling:app --host 0.0.0.0 --port 6006 --reload
# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "6006"]
# # CMD ["./my-test-shell.sh"]

# Use the official Python 3.10 image
FROM python:3.10

# Set the working directory to /code
WORKDIR /code

# Copy the current directory contents into the container at /code
COPY ./requirements.txt /code/requirements.txt

RUN pip install -i https://pypi.org/simple/ bitsandbytes --upgrade
# Install dependencies from requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user
# Switch to the "user" user
USER user
# Set home to the user's home directory
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

# CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
CMD ["uvicorn", "api-server-0226:app", "--host", "0.0.0.0", "--port", "7860"]
# uvicorn api-server-0226:app --host 0.0.0.0 --port 7860
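One thing worth checking in the Dockerfile above: the error names two requirements, but only bitsandbytes is installed explicitly. If `accelerate` is not already pulled in by requirements.txt, a fragment like this would cover both (the version pins are illustrative, not from the original report):

```
accelerate>=0.27.0
bitsandbytes>=0.42.0
```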

Reproduction

ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes


When I use AutoTrain on Hugging Face and select the options shown in the screenshot, I encounter this error. I noticed that most models trigger it. Could it be related to the fact that I'm using the free CPU tier?

Expected behavior

I hope the pros can help me figure out how to fix this error.

DaveChini commented 8 months ago

I had this on my local Windows system, and it was due to having the CPU version of PyTorch; a dependency of Accelerate requires the GPU version.

```python
import torch

print(torch.__version__)
```

This should print something like `2.2.1+cu118`; if it says `+cpu`, you have the CPU-only build.
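The version-string check described above can be wrapped in a tiny helper. `is_cpu_build` is a hypothetical name for illustration; the heuristic just inspects the local version tag that PyTorch wheels carry:

```python
def is_cpu_build(version_string: str) -> bool:
    """True if a torch version string denotes a CPU-only wheel (e.g. '2.1.0+cpu')."""
    return "+cpu" in version_string

# The two build flavors described above:
print(is_cpu_build("2.2.1+cu118"))  # CUDA build -> False
print(is_cpu_build("2.1.0+cpu"))    # CPU-only build -> True
```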

Kushalamummigatti commented 8 months ago

@DaveChini I'm having the same issue, but when I print it, it shows a `+cu` build: `2.1.2+cu121`. I still get the same error. Any help?

cieske commented 8 months ago

Python 3.10, transformers 4.38.2, bitsandbytes 0.42.0, accelerate 0.27.2, torch 2.0.1+cu117

`torch.cuda.is_available()` works, `nvidia-smi` works, and importing accelerate and bitsandbytes works, but 8-bit quantization fails with the same error.
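When everything seems installed, it can still help to confirm that both packages are importable by the exact interpreter raising the error. This stdlib-only sketch approximates the availability check transformers performs (it makes no claim about transformers' exact internals):

```python
import importlib.util

def quantization_deps_present() -> bool:
    """Check that both accelerate and bitsandbytes are importable here."""
    return all(
        importlib.util.find_spec(pkg) is not None
        for pkg in ("accelerate", "bitsandbytes")
    )

# False here would explain the ImportError even if the packages are
# installed in some *other* environment on the machine.
print(quantization_deps_present())
```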

eneko98 commented 8 months ago

Did you manage to solve the issue?

nanxue2023 commented 8 months ago

> @DaveChini I'm having the same issue, but when I print it, it shows a `+cu` build: `2.1.2+cu121`. I still get the same error. Any help?

I have the same error.

seyf97 commented 8 months ago

Had the same error following this tutorial: https://huggingface.co/docs/peft/main/en/developer_guides/quantization on a Kaggle P100 GPU.

torch.cuda.is_available() returns True as well.

nanxue2023 commented 8 months ago

> @DaveChini I'm having the same issue, but when I print it, it shows a `+cu` build: `2.1.2+cu121`. I still get the same error. Any help?
>
> I have the same error.

When I ran my code in a Jupyter notebook, this error appeared. When I ran the same code from my local VS Code, the error disappeared. Hope this helps; maybe it doesn't work well in interactive notebooks.
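A common cause of this notebook-only failure is that the Jupyter kernel runs a different interpreter than the shell where `pip install` ran. Printing the kernel's interpreter path makes any mismatch visible:

```python
import sys

# If this path differs from `which python` in the shell where you installed
# the packages, the kernel cannot see them.
print("kernel interpreter:", sys.executable)

# From inside a notebook, `%pip install accelerate bitsandbytes` installs
# into the kernel's own environment rather than whatever `pip` resolves to.
```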

Titus-von-Koeller commented 8 months ago

Hey @younesbelkada,

I'm not sure what to make of this issue. Seems to me based on the error log that if anything it's more related to Transformers? Wdyt?

nerner94 commented 8 months ago

> I had this on my local Windows system, and it was due to having the CPU version of PyTorch; a dependency of Accelerate requires the GPU version.
>
> ```python
> import torch
>
> print(torch.__version__)
> ```
>
> This should print something like `2.2.1+cu118`; if it says `+cpu`, you have the CPU-only build.

This worked for me! Thank you :)

jacqueline-he commented 7 months ago

fwiw, downgrading transformers resolved the issue for me (4.38.2 → 4.31.0).

didlawowo commented 7 months ago

I get the same on my side, on an on-premise cluster with an RTX GPU.

ashwin-js commented 7 months ago

> fwiw, downgrading transformers resolved the issue for me (4.38.2 → 4.31.0).

This worked, Thanks !!

Huyueeer commented 7 months ago


> @DaveChini I'm having the same issue, but when I print it, it shows a `+cu` build: `2.1.2+cu121`. I still get the same error. Any help?
>
> I have the same error.
>
> When I ran my code in a Jupyter notebook, this error appeared. When I ran the same code from my local VS Code, the error disappeared. Hope this helps; maybe it doesn't work well in interactive notebooks.

Yeah, me too. Same reason.

ibrahimberb commented 6 months ago

If you have just installed the libraries (e.g. `pip install accelerate peft bitsandbytes transformers trl`) and are running Jupyter, try restarting the kernel.

Adityaa-Sharma commented 6 months ago

Cloning the repo in a Colab notebook with TPU enabled and using ngrok to serve it worked for me. It still doesn't work on localhost, but it works through the link ngrok provides. I think it is a CPU problem.

pradeep10kumar commented 2 months ago

RuntimeError: Failed to import transformers.integrations.bitsandbytes because of the following error (look up to see its traceback):

    CUDA Setup failed despite GPU being available. Please run the following command to get more information:

    python -m bitsandbytes

    Inspect the output of the command and see if you can locate CUDA libraries. You might need to add them
    to your LD_LIBRARY_PATH. If you suspect a bug, please take the information from python -m bitsandbytes
    and open an issue at: https://github.com/TimDettmers/bitsandbytes/issues

I am running from the terminal.

I am working with the following versions:

Torch: 1.13.1+cu116
TorchVision: 0.14.1+cu116
Torchaudio: 0.13.1+cu116
bitsandbytes: 0.43.3
transformers: 4.44.0
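The error text above suggests inspecting `LD_LIBRARY_PATH` when `python -m bitsandbytes` cannot locate CUDA libraries. Before editing it blindly, a small stdlib-only sketch can show what the loader is currently told to search (`nvcc` on `PATH` is only a rough indicator that a CUDA toolkit is installed):

```python
import os
import shutil

# Directories the dynamic loader is told to search for CUDA libraries.
for entry in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
    if entry:
        print("search dir:", entry)

# Rough indicator of a CUDA toolkit install (prints None if not on PATH).
print("nvcc:", shutil.which("nvcc"))
```

If none of the printed directories contain `libcudart.so*`, adding the directory that does (often under `/usr/local/cuda/lib64`, though the location varies by install) to `LD_LIBRARY_PATH` is what the error message recommends.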